CN111046698A - Visual positioning method and system for visual editing - Google Patents

Visual positioning method and system for visual editing

Info

Publication number
CN111046698A
CN111046698A (application CN201811189739.6A)
Authority
CN
China
Prior art keywords
key frame
visual
abnormal
key
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811189739.6A
Other languages
Chinese (zh)
Other versions
CN111046698B (en)
Inventor
刘哲 (Liu Zhe)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuineng Robot Shanghai Co ltd
Original Assignee
Zhuineng Robot Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuineng Robot Shanghai Co ltd filed Critical Zhuineng Robot Shanghai Co ltd
Priority to CN201811189739.6A (granted as CN111046698B)
Priority to PCT/CN2019/108294 (published as WO2020073818A1)
Publication of CN111046698A
Application granted
Publication of CN111046698B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/38Electronic maps specially adapted for navigation; Updating thereof
    • G01C21/3804Creation or updating of map data
    • G01C21/3833Creation or updating of map data characterised by the source of data
    • G01C21/3837Data obtained from a single source
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The invention provides a visual positioning method and system for visual editing. After key frames of a scene image and their feature information are acquired, a visual editing interface is created and the key frames or their feature information are displayed, so that a user who is not proficient in programming languages can intervene in the scene construction and positioning process. This improves the accuracy of scene construction and positioning and raises construction efficiency so as to adapt to complex and changeable environments. In addition, abnormal frames and abnormal connecting lines are screened out, which facilitates viewing and editing.

Description

Visual positioning method and system for visual editing
Technical Field
The application relates to the technical field of computer vision, and in particular to a visual positioning method and a visual positioning system for visual editing.
Background
As computer vision becomes an increasingly important development focus in the robotics field, Simultaneous Localization and Mapping (SLAM) technology has become a key technology of the industry because of its full autonomy. However, the inventor of the present invention found that existing SLAM technology adapts poorly to complex and changeable environments: its environment-recognition error rate is high, the result does not match the real environment, and a large number of erroneous frames remain in the map construction process, which affects the accuracy of map construction and positioning.
Disclosure of Invention
The invention aims to provide a visual positioning method and a visual positioning system for visual editing, which improve the accuracy and efficiency of visual positioning and scene map construction.
To achieve the above technical object, some embodiments of the present invention disclose a visual positioning method for visual editing, comprising:
acquiring a key frame of a scene image and feature information of the key frame, wherein the feature information comprises position information of the key frame;
creating a visual editing interface, and displaying icons representing key frames on the visual editing interface according to the position information of the key frames;
displaying the key frames and/or feature information of the key frames for editing after detecting selection of the icons representing the key frames; and
constructing a visual positioning map through the edited key frames and/or the edited feature information of the key frames.
Certain embodiments of the present invention also disclose a visual positioning system for visual editing, comprising:
the first acquisition module is used for acquiring a key frame of a scene image and feature information of the key frame, wherein the feature information comprises position information of the key frame;
the creating module is used for creating a visual editing interface and displaying icons representing key frames on the visual editing interface according to the position information of the key frames acquired by the first acquiring module;
a detection module to detect selection of an icon representing a key frame;
the display module is used for displaying the key frames and/or the characteristic information of the key frames for editing after the detection module detects the selection of the icons representing the key frames; and
the construction module is used for constructing the visual positioning map through the edited key frames and/or the feature information of the key frames.
In one embodiment, when the position information of a plurality of key frames is the same, the plurality of key frames can be formed into a key frame list and an icon representing the key frame list can be displayed on the visual editing interface. After selection of the icon representing the key frame list is detected, the plurality of key frames in the key frame list and/or their feature information are displayed for editing. Forming a key frame list from key frames with the same position information facilitates batch processing.
In one embodiment, the feature information of each key frame in the key frame list may be compared for similarity with the feature information of the other adjacent key frames in the list to find, for each key frame, the key frame whose feature information is most similar to it. It is then judged whether the similarity difference between each key frame and its most similar key frame exceeds a first preset threshold; a key frame whose similarity difference exceeds the first preset threshold even with respect to its most similar counterpart is marked as an abnormal frame. In this way, suspicious frames can be screened out from a group of key frames with the same position information.
In one embodiment, an abnormal frame list including the one or more screened-out abnormal frames may be provided on the visual editing interface. After selection of the abnormal frame list is detected, the icon representing a key frame list that contains an abnormal frame is displayed on the visual editing interface in a manner different from key frame lists that contain no abnormal frame, which facilitates viewing and editing.
In one embodiment, the feature information may further include displacement and deflection information between the key frame and an adjacent key frame. A connecting line between the icon representing the key frame and the icon representing the adjacent key frame is displayed on the visual editing interface, and after selection of the connecting line is detected, the displacement and deflection information of the key frame and the adjacent key frame is displayed for editing. Linking two adjacent key frames makes it convenient to adjust the relationship between them.
In one embodiment, it may be determined whether the number of feature point matching pairs between a key frame and an adjacent key frame exceeds a second preset threshold, and a connecting line between a key frame and an adjacent key frame whose number of feature point matching pairs does not exceed the second preset threshold is marked as an abnormal connecting line, so as to screen out suspicious connecting lines.
In one embodiment, an abnormal connecting line list including one or more screened-out abnormal connecting lines may be provided on the visual editing interface. After selection of the abnormal connecting line list is detected, the one or more abnormal connecting lines are displayed on the visual editing interface in a manner different from non-abnormal connecting lines, which facilitates viewing and editing.
In one embodiment, the key frame is linked with the adjacent key frame through a connecting line, and when the feature information of the key frame is edited, the feature information of the adjacent key frame is correspondingly edited according to the displacement and deflection information of the key frame and the adjacent key frame, so that the two adjacent key frames are modified in a linkage manner.
In one embodiment, the visual positioning method is a SLAM method. In one embodiment, the visual positioning system is a SLAM system. Providing a flexible visual editing interface within the SLAM method and system ensures recognition accuracy in complex and changeable environments while the mobile robot retains a degree of autonomy.
In one embodiment, editing the keyframes and/or feature information of the keyframes includes adding, moving, and/or removing the keyframes, and/or modifying the feature information of the keyframes, wherein the feature information includes coordinates of feature points, descriptors of the feature points, and/or bag-of-word values of the feature points.
In one embodiment, editing of the keyframes and/or feature information of the keyframes includes adding, moving, and/or removing links between the keyframes and adjacent keyframes, and/or modifying displacement and deflection information of the keyframes and adjacent keyframes.
Compared with the prior art, the main differences and effects of the embodiments of the invention are as follows:
In the invention, after the key frames of the scene image and the feature information of the key frames are obtained, a visual editing interface is created and the key frames or their feature information are displayed, so that a user who is not proficient in programming languages can intervene in the scene construction and positioning process. This improves the accuracy of scene construction and positioning and raises construction efficiency so as to adapt to complex and changeable environments.
It is to be understood that, within the scope of the present invention, the above-described technical features of the present invention and the technical features specifically described below (e.g., in the embodiments and examples) may be combined with each other to constitute new or preferred technical solutions. Due to space limitations, these combinations are not described one by one here.
Drawings
Fig. 1 is a flowchart illustrating a visual positioning method for visual editing according to an exemplary embodiment of the present application.
Fig. 2 is a flow chart illustrating a process of screening abnormal frames according to an exemplary embodiment of the present application.
Fig. 3 is a flow chart illustrating displaying an abnormal frame according to an example embodiment of the present application.
FIG. 4 is a schematic flow chart of linking a key frame with a neighboring key frame according to an example embodiment of the present application.
Fig. 5 is a flowchart illustrating screening for abnormal connections according to an exemplary embodiment of the present application.
Fig. 6 is a flowchart illustrating displaying an abnormal connecting line according to an exemplary embodiment of the present application.
Fig. 7 is a block diagram of a visual positioning system for visual editing according to an example embodiment of the present application.
Fig. 8 is a block diagram of an abnormal frame screening subsystem according to an example embodiment of the present application.
FIG. 9 is a block diagram of an exception link screening subsystem according to an exemplary embodiment of the present application.
Fig. 10 is a schematic diagram of a visual editing interface according to an example embodiment of the present application.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the development of mobile robot technology, SLAM technology, as an important component of autonomous navigation for mobile robots, has seen some improvements, but it still needs to be improved in changeable and complex scenes. At present, in environments such as logistics and transportation, the recognition error rate of existing SLAM technology is very high, with the error rate for video frames exceeding 40%. Such a large number of erroneous frames greatly affects the accuracy of map construction and visual positioning and reduces the precision and efficiency of the mobile robot's visual navigation.
The inventors of the present invention found that, although full autonomy is the broad direction of development, the accuracy of current SLAM technology is still not high enough. In view of this, the present invention provides a visual positioning method and system for visual editing. By creating a visual editing interface and displaying key frames or their feature information, key frames can be viewed, selected, and edited on the visual editing interface, so that users who are not proficient in programming languages can also intervene in the scene construction and positioning process, keeping the operating threshold low. When the environment changes, the small number of key frames that the SLAM technology cannot identify can be adjusted on the visual editing interface in a timely manner, which greatly improves the accuracy of scene construction and positioning and raises construction efficiency, thereby suiting complex and changeable environments.
Fig. 1 illustrates a visual positioning method of visual editing according to an example embodiment of the present application. As shown in fig. 1, the visual positioning method includes the following steps:
in step 101, a key frame of a scene image and feature information of the key frame are acquired, the feature information including position information of the key frame. In this step, it can be understood that the mobile robot may adopt the two-dimensional image provided by the vision sensor as a key frame of the scene image and send the key frame information to the map processor. The location information of the key frame can be obtained by feature point matching with the previously saved adjacent key frame or directly obtained by the motion track. Since the key frames and the manner of acquiring the feature information of the key frames may adopt various known manners, details thereof are not repeated herein.
Thereafter, step 102 is entered, a visual editing interface is created, and icons representing key frames are displayed on the visual editing interface according to the position information of the key frames. The visual editing interface may be associated with the location of the entire scene in any suitable manner and the corresponding key frames displayed on the visual editing interface. In one embodiment, the visual editing interface corresponds one-to-one to the location of the entire scene. In one embodiment, the visual editing interface may be divided into multiple regions, each corresponding to a different portion of the entire scene, e.g., a first region of the visual editing interface corresponding to an aisle portion, a second region corresponding to a shelving portion, and so on. By flexibly setting the visual editing interface and displaying the key frames, operation can be facilitated. In other embodiments, the visual editing interface may be associated with the location of the entire scene in other ways as well.
The icons representing the key frames can be arranged in any form according to specific conditions (such as the size of a visual editing interface and the condition of scene layout), for example, the icons can be arranged in a square shape, a circular shape or other shapes so as to be convenient for an operator to view and operate. In one embodiment, when the visual editing interface is divided into a plurality of different areas, different forms of icons can be set for the key frames in the different areas.
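As an illustration of how icons representing key frames might be placed on the interface, the following sketch assumes the KeyFrame structure above and a hypothetical place_icons helper that scales scene coordinates into canvas coordinates and lets an optional region lookup assign a different icon style per interface area; it is not the patent's implementation.

```python
# A minimal sketch (hypothetical helper) of laying out key frame icons on the visual editing interface.
def place_icons(keyframes, canvas_w, canvas_h, scene_w, scene_h, region_of=None):
    icons = []
    for kf in keyframes:
        x, y = kf.position
        px = x / scene_w * canvas_w          # scale scene x into canvas pixels
        py = y / scene_h * canvas_h          # scale scene y into canvas pixels
        style = region_of(kf.position) if region_of else "circle"  # e.g. aisle vs. shelf region style
        icons.append({"frame_id": kf.frame_id, "xy": (px, py), "style": style})
    return icons
```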
Step 103 is then entered, and upon detecting selection of an icon representing a key frame, the key frame and/or feature information of the key frame is displayed for editing. In one embodiment, editing the keyframes and/or feature information of the keyframes includes adding, moving, and/or removing the keyframes, and/or modifying the feature information of the keyframes, wherein the feature information includes coordinates of feature points, descriptors of feature points, bag-of-word values of feature points, and/or other information. The key frames and/or their characteristic information may be selected and edited by any known input means.
Thereafter, step 104 is entered to construct a visual positioning map by using the edited key frames and/or feature information of the key frames.
Where the visual positioning method is a SLAM method, a flexible visual editing interface is provided so that recognition accuracy can be ensured in complex and changeable environments while the mobile robot retains a degree of autonomy.
In various embodiments of the present application, each key frame may be analyzed for easy viewing and editing.
In one embodiment, in step 102, when certain information (e.g., position information) of a plurality of key frames is the same, the plurality of key frames may be combined to form a key frame list and icons representing the key frame list may be displayed on the visual editing interface. The icons representing the key frame list may be a list graphic, a stack of key frame icons, or other forms.
In the present embodiment, accordingly, in step 103, after detecting the selection of the icon representing the key frame list, a plurality of key frames and/or feature information of the plurality of key frames in the key frame list are displayed for editing. In one embodiment, multiple key frames and/or feature information of multiple key frames may be displayed directly for editing after clicking on the key frame list. In one embodiment, after the key frame list is clicked, icons of a plurality of key frames can be displayed for further clicking, and then the clicked key frame and/or characteristic information thereof is displayed. In other embodiments, the plurality of key frames in the key frame list and the feature information of the plurality of key frames may be displayed in other manners.
A plurality of key frames with the same information form a key frame list, so that batch processing can be conveniently carried out. In one embodiment, in the case where icons representing key frames are displayed according to position information, a plurality of key frames having the same position information are combined to form a key frame list. However, it will be understood by those skilled in the art that in other embodiments of the present application, other key frames with the same information may be combined to form a list to facilitate batch processing.
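A minimal sketch of forming key frame lists from key frames with identical position information follows, assuming the KeyFrame structure above; treating positions as "the same" up to rounding is an illustrative assumption rather than the patent's rule.

```python
# A minimal sketch (assumed grouping rule) of forming key frame lists for batch processing.
from collections import defaultdict

def group_by_position(keyframes, digits=2):
    groups = defaultdict(list)
    for kf in keyframes:
        # positions are treated as identical up to rounding (illustrative assumption)
        key = (round(kf.position[0], digits), round(kf.position[1], digits))
        groups[key].append(kf)
    # only positions shared by more than one key frame form a key frame list
    return {pos: frames for pos, frames in groups.items() if len(frames) > 1}
```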
The plurality of key frames can be further compared with one another to screen out suspicious frames. FIG. 2 shows a flow diagram of screening abnormal frames according to an example embodiment of the present application. As shown in fig. 2, the visual positioning method may further include the following steps:
In step 201, the feature information of each key frame in the key frame list is compared with the feature information of other adjacent key frames in the key frame list. The comparison may be performed in any known manner.
Then, step 202 is performed, and according to the comparison result, the key frame with the highest similarity to the feature information of each key frame is obtained.
Step 203 is then entered to determine whether the similarity difference between each key frame and the key frame with the highest similarity exceeds a first preset threshold. If yes, step 204 is entered; if not, the process returns to step 201 to continue with the next key frame until all key frames in the key frame list have been processed. The first preset threshold may be set as required.
In step 204, key frames whose similarity difference from the key frame with the highest similarity exceeds the first preset threshold are marked as abnormal frames. If a key frame is not sufficiently similar even to its most similar counterpart, the key frame may not belong to that location or the environment may have changed.
For example, suppose the key frame list includes key frame A, key frame B, and key frame C. If the similarity difference between key frame A and key frame B is S1, the similarity difference between key frame B and key frame C is S2, and S2 < S1, then key frame C can be considered the key frame with the highest similarity to key frame B. S2 is then compared with the first preset threshold T1; if S2 > T1, even key frame C is not similar enough to key frame B, and key frame B is marked as an abnormal frame. In other embodiments, feature information other than the similarity may be used for screening or auxiliary screening, and may be set according to the specific scene.
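The screening of steps 201-204 could look like the following sketch, which assumes a simple bag-of-words distance as the similarity-difference measure; the metric and helper names are assumptions, not the measure prescribed by the patent.

```python
# A minimal sketch (assumed similarity measure) of abnormal frame screening, steps 201-204.
def similarity_difference(kf_a, kf_b):
    # assumed metric: L1 distance between bag-of-words vectors (smaller = more similar)
    words = set(kf_a.bow_vector) | set(kf_b.bow_vector)
    return sum(abs(kf_a.bow_vector.get(w, 0.0) - kf_b.bow_vector.get(w, 0.0)) for w in words)

def screen_abnormal_frames(keyframe_list, t1):
    abnormal = []
    for kf in keyframe_list:
        others = [o for o in keyframe_list if o is not kf]
        if not others:
            continue
        # smallest difference = the most similar neighbouring key frame in the list
        best_diff = min(similarity_difference(kf, o) for o in others)
        if best_diff > t1:          # not similar enough even to its closest neighbour
            abnormal.append(kf)
    return abnormal
```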
After the abnormal frames are screened out, they can be gathered into a list and displayed when they need to be viewed. FIG. 3 shows a flow diagram for displaying an abnormal frame according to an example embodiment of the present application. As shown in fig. 3, the visual positioning method may further include the following steps:
In step 301, an abnormal frame list is provided on the visual editing interface, wherein the abnormal frame list comprises one or more abnormal frames. When an abnormal frame exists, an abnormal frame list including the abnormal frame may be displayed on either side of the visual editing interface. In one embodiment, the abnormality information of each abnormal frame, such as by how much the similarity difference exceeds the first preset threshold, may also be displayed in the abnormal frame list for reference.
Step 302 is thereafter entered, and upon detecting selection of the abnormal frame list, icons representing the abnormal frame-containing key frame list are displayed on the visual editing interface in a different manner from the abnormal frame-free key frame list. In one embodiment, icons representing a list of keyframes that contain outlier frames and icons representing a list of keyframes that do not contain outlier frames are displayed in different colors on a visual editing interface. Further, the icon representing the abnormal frame may be displayed in a different manner from the non-abnormal frame upon detection of selection of the key frame list icon representing the frame containing the abnormal frame. In one embodiment, icons representing a list of keyframes that contain outlier frames and icons representing a list of keyframes that do not contain outlier frames are displayed in different lines or graphics on the visual editing interface. In another embodiment, when an abnormal frame is otherwise determined and the determined abnormal frame is not in a certain key frame list, an icon representing the abnormal frame is directly displayed in a different manner from the non-abnormal frame. In other embodiments, icons representing a list of key frames containing an abnormal frame and icons representing a list of key frames not containing an abnormal frame may also be displayed in other different manners.
Displaying the icon representing that the abnormal frame is contained and the icon representing that the abnormal frame is not contained in different ways can facilitate viewing and editing. For example, when the abnormal information of a plurality of abnormal frames is viewed at the same time, there can be a rough understanding of the overall situation, such as from which key frame an error occurred, etc.
In one embodiment, the feature information may further include displacement and deflection information of the key frame and the neighboring key frame, and the key frame and the neighboring key frame may be associated by the displacement and deflection information. FIG. 4 shows a flow diagram of linking key frames according to an example embodiment of the present application. As shown in fig. 4, the following steps may be further included after step 102:
in step 401, a line connecting an icon representing a key frame and an icon representing an adjacent key frame is displayed on the visual editing interface, thereby linking the key frame and the adjacent key frame.
Step 402 is then entered, and upon detecting selection of a link, the displacement and deflection information of the key frame and the adjacent key frame are displayed for editing. In one embodiment, editing of the keyframes and/or feature information of the keyframes includes adding, moving, and/or removing links between the keyframes and adjacent keyframes, and/or modifying displacement and deflection information of the keyframes and adjacent keyframes. The operation may be performed by any known input means.
Thereafter, step 403 is performed to construct the visual positioning map by the edited displacement and deflection information of the key frame and the adjacent key frame.
In other embodiments, the key frame may be linked with other information of neighboring key frames as desired, and is not limited to the displacement and deflection information described above. By connecting a key frame with an adjacent key frame, the relationship between the two key frames can be conveniently viewed and adjusted.
In one embodiment, in a case that a key frame is connected to an adjacent key frame through a connecting line, when feature information of the key frame is edited, the feature information of the adjacent key frame may be edited correspondingly according to the connection information of the key frame and the adjacent key frame, for example, according to displacement and deflection information of the key frame and the adjacent key frame.
For example, when the position information of a key frame is modified, the position information of an adjacent key frame is correspondingly modified according to the displacement information of the key frame and the adjacent key frame; when the angle information of the key frame is modified, the angle information of the adjacent key frame is correspondingly modified according to the deflection information of the key frame and the adjacent key frame. The key frame is connected with the adjacent key frame, so that the associated editing can be conveniently carried out.
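A minimal sketch of this linked editing follows, assuming each connecting line stores the displacement and deflection of the adjacent key frame relative to the key frame; the connection layout and the apply_edit helper are illustrative assumptions.

```python
# A minimal sketch (assumed connection representation) of linked editing through a connecting line:
# when a key frame's pose is edited, the neighbour is moved and rotated by the stored relative transform.
import math

def apply_edit(kf, neighbour, connection, new_position, new_heading):
    kf.position = new_position
    kf.heading = new_heading
    dx, dy = connection["displacement"]     # relative offset from kf to its neighbour
    dtheta = connection["deflection"]       # relative rotation from kf to its neighbour
    c, s = math.cos(new_heading), math.sin(new_heading)
    # re-apply the stored offset in the key frame's new orientation
    neighbour.position = (new_position[0] + c * dx - s * dy,
                          new_position[1] + s * dx + c * dy)
    neighbour.heading = new_heading + dtheta
```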
By further comparing the key frame with the adjacent key frames, suspicious connecting lines can be screened out. FIG. 5 shows a flow diagram of screening abnormal connecting lines according to an example embodiment of the present application. As shown in fig. 5, the visual positioning method may further include the following steps:
In step 501, it is determined whether the number of feature point matching pairs between a key frame and an adjacent key frame exceeds a second preset threshold. If not, step 502 is entered; if so, the process returns to step 501 to continue with the next pair of key frames until all pairs of adjacent key frames have been processed. The second preset threshold may be set as needed. The calculation of the number of feature point matching pairs between a key frame and an adjacent key frame is a known calculation method and is not described herein again.
In step 502, the connecting lines between key frames and adjacent key frames whose number of feature point matching pairs does not exceed the second preset threshold are marked as abnormal connecting lines. Screening out these connecting lines and marking them as abnormal makes it convenient for the user to check them. If the number of feature point matching pairs between a key frame and an adjacent key frame is insufficient, the adjacency relationship may be problematic or the environment may have changed, so attention is required.
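Steps 501-502 could be sketched as follows; OpenCV's brute-force matcher with Lowe's ratio test is used purely as an illustration of counting feature point matching pairs and is not named by the patent, and the connection dictionary layout is an assumption.

```python
# A minimal sketch (illustrative matcher) of abnormal connecting line screening, steps 501-502.
import cv2

def count_matching_pairs(kf_a, kf_b, ratio=0.75):
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)            # assumes binary (e.g. ORB-style) descriptors
    matches = matcher.knnMatch(kf_a.descriptors, kf_b.descriptors, k=2)
    good = 0
    for pair in matches:
        # Lowe's ratio test keeps only confident matching pairs
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

def screen_abnormal_connections(connections, t2):
    abnormal = []
    for conn in connections:                              # each connection links two adjacent key frames
        if count_matching_pairs(conn["a"], conn["b"]) <= t2:
            abnormal.append(conn)                         # not enough matching pairs: mark as abnormal
    return abnormal
```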
In other embodiments, when the key frame is linked with the adjacent key frame through other information, abnormal connecting lines are screened using that other information.
After the abnormal connecting lines are screened out, they can be gathered into a list and displayed when they need to be checked. FIG. 6 illustrates a flow chart for displaying an abnormal connecting line according to an example embodiment of the present application. As shown in fig. 6, the visual positioning method may further include the following steps:
In step 601, an abnormal connecting line list is provided on the visual editing interface, wherein the abnormal connecting line list comprises one or more abnormal connecting lines. When an abnormal connecting line exists, the abnormal connecting line list may be displayed on either side of the visual editing interface. In one embodiment, the abnormality information of each abnormal connecting line, such as by how many feature point matching pairs it falls short of the second preset threshold, may also be displayed in the abnormal connecting line list for reference.
Step 602 is then entered, and after detecting the selection of the abnormal link list, one or more abnormal links are displayed on the visual editing interface in a different manner from the non-abnormal links. In one embodiment, the abnormal and non-abnormal links are displayed in different colors on the visual editing interface. In one embodiment, the abnormal connecting line and the non-abnormal connecting line are displayed in different line shapes on the visual editing interface. In other embodiments, the abnormal and non-abnormal lines may be displayed in different manners, such as bold.
Abnormal connecting lines and non-abnormal connecting lines are displayed in different modes, and the abnormal connecting lines and the non-abnormal connecting lines can be conveniently checked and edited. For example, when looking at the exception information of multiple exception links at the same time, there can be a general understanding of the overall situation, e.g., from which link an error occurred, etc.
The above lists example flows of analysis processing for key frames, which may be used alone or in combination. It is understood that other analysis processes may be performed on the key frames to facilitate viewing and editing.
The method embodiments of the present invention may be implemented in software, hardware, firmware, etc. Whether the present invention is implemented as software, hardware, or firmware, the instruction code may be stored in any type of computer-accessible memory (e.g., permanent or modifiable, volatile or non-volatile, solid or non-solid, fixed or removable media, etc.). Also, the memory may be, for example, programmable array logic (PAL), random access memory (RAM), programmable read-only memory (PROM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), a magnetic disk, an optical disc, a digital versatile disc (DVD), or the like.
Fig. 7 shows a visual positioning system for visual editing according to an example embodiment of the present application, which may implement the visual positioning method described above. As shown in fig. 7, the visual positioning system includes a first acquisition module, a creation module, a detection module, a display module, and a construction module.
The first acquisition module is used for acquiring a key frame of a scene image and characteristic information of the key frame, wherein the characteristic information comprises position information of the key frame. The first acquisition module may be any device capable of acquiring an image of a scene.
The creating module is used for creating a visual editing interface and displaying icons representing key frames on the visual editing interface according to the position information of the key frames acquired by the first acquiring module. Icons representing key frames may be displayed and set in the manner described above.
The detection module is to detect selection of an icon representing a key frame. When the detection module detects selection of an icon representing a key frame, the display module is used for displaying the key frame and/or feature information of the key frame for editing. In one embodiment, the above-mentioned visual positioning system may further comprise an editing module to edit the key frames and the feature information of the key frames, and the editing module may be implemented by any input device. In one embodiment, the editing module adds, moves, and/or removes key frames and/or modifies feature information of key frames. In one embodiment, the feature information includes coordinates of the feature points, descriptors of the feature points, bag-of-words values of the feature points, and/or other information. In other embodiments, the editing module may perform other operations on the key frames as desired.
If the key frame or the feature information of the key frame is edited, the construction module is used for constructing the visual positioning map through the edited key frame and the feature information of the key frame.
Where the visual positioning system is a SLAM system, a flexible visual editing interface is provided so that recognition accuracy can be ensured in complex and changeable environments while the mobile robot retains a degree of autonomy.
In various embodiments of the present application, each module may analyze and process each key frame to facilitate viewing and editing.
In one embodiment, the creation module may be configured to form a key frame list from a plurality of key frames having the same position information and display an icon representing the key frame list on the visual editing interface. The detection module is to detect selection of an icon representing a list of keyframes. When the detection module detects the selection of the icon representing the key frame list, the display module is used for displaying a plurality of key frames in the key frame list and/or the characteristic information of the plurality of key frames for editing. A plurality of key frames with the same position information form a key frame list, and batch processing can be conveniently carried out.
It is understood that in other embodiments of the present application, other key frames with the same information may be merged and tabulated into a list to facilitate batch processing.
The visual positioning system may further include an abnormal frame screening subsystem, and fig. 8 shows a block diagram of the abnormal frame screening subsystem according to an example embodiment of the present application. As shown in fig. 8, the abnormal frame screening subsystem may include a comparison module, a second obtaining module, a first determining module, and a first calibrating module.
The comparison module is used for comparing the similarity between the characteristic information of each key frame in the key frame list and the characteristic information of other adjacent key frames in the key frame list. And the second acquisition module is used for acquiring the key frame with the highest similarity to the characteristic information of each key frame according to the comparison result of the comparison module.
The first judging module is used for judging whether the similarity difference between each key frame and the key frame with the highest similarity exceeds a first preset threshold. The first calibration module is used for calibrating, as abnormal frames, the key frames determined by the first judging module to have a similarity difference from the key frame with the highest similarity exceeding the first preset threshold. Screening out key frames whose similarity difference from the other key frames in the key frame list exceeds the first preset threshold and marking them as abnormal frames makes it convenient for the user to check them.
In other embodiments, feature information other than the similarity may be used for filtering or auxiliary filtering, and may be set according to a specific scene.
In one embodiment, when an abnormal frame exists, the creating module may be further configured to provide an abnormal frame list on the visual editing interface, where the abnormal frame list includes the one or more abnormal frames calibrated by the first calibration module. The creating module may display the abnormal frame list on either side of the visual editing interface. In one embodiment, the creating module may display the abnormality information of each abnormal frame, such as by how much the similarity difference exceeds the first preset threshold, in the abnormal frame list for reference.
The detection module is to detect a selection of the abnormal frame list. When the detection module detects the selection of the abnormal frame list, the display module is used for displaying icons representing the key frame list containing the abnormal frames on the visual editing interface in a mode different from the key frame list not containing the abnormal frames. Icons representing a list of key frames containing an abnormal frame and icons representing a list of key frames not containing an abnormal frame may be displayed in the manner as described above, thereby facilitating viewing and editing. When the abnormal information of a plurality of abnormal frames is viewed at the same time, the general situation can be roughly known.
In one embodiment, a key frame may be associatively connected with an adjacent key frame. For example, the feature information may further include displacement and deflection information of the key frame and the neighboring key frame, and the key frame and the neighboring key frame may be associated and connected by the displacement and deflection information.
The creation module may be operative to display a line connecting an icon representing a key frame with an icon representing an adjacent key frame on the visual editing interface.
The detection module is used for detecting the selection of the connection. When the detection module detects the selection of the connection line, the display module is used for displaying the displacement and deflection information of the key frame and the adjacent key frame for editing. In one embodiment, the visual positioning system may further comprise an editing module to edit the displacement and deflection information of a key frame from neighboring key frames. In one embodiment, the editing module adds, moves and/or removes links of key frames to adjacent key frames and/or modifies displacement and deflection information of key frames to adjacent key frames. In another embodiment, when a key frame is connected to an adjacent key frame through a connecting line, the editing module may edit the feature information of the key frame and correspondingly edit the feature information of the adjacent key frame according to the connection information of the key frame and the adjacent key frame, for example, according to the displacement and deflection information of the key frame and the adjacent key frame.
If the displacement and deflection information of the key frame and the adjacent key frame is edited, the construction module is used for constructing the visual positioning map according to the edited displacement and deflection information of the key frame and the adjacent key frame.
In other embodiments, the key frame may be linked with other information of neighboring key frames as desired, and is not limited to the displacement and deflection information described above. By connecting a key frame with an adjacent key frame, the relationship between the two key frames can be conveniently viewed and adjusted.
The visual localization system may further include an abnormal wiring screening subsystem, and fig. 9 shows a block diagram of the abnormal wiring screening subsystem according to an example embodiment of the present application. As shown in fig. 9, the abnormal connection screening subsystem may include a second determining module and a second calibrating module.
The second judging module is used for judging whether the number of feature point matching pairs between a key frame and an adjacent key frame exceeds a second preset threshold. The second calibration module is used for calibrating, as abnormal connecting lines, the connecting lines between key frames and adjacent key frames whose number of feature point matching pairs is determined by the second judging module not to exceed the second preset threshold. Screening out these connecting lines and marking them as abnormal makes it convenient for the user to check them.
In other embodiments, when the key frames are linked with other information of neighboring key frames, the abnormal connecting line screening subsystem may screen abnormal connecting lines using that other information.
In one embodiment, when an abnormal connecting line exists, the creating module may be further configured to provide an abnormal connecting line list on the visual editing interface, where the abnormal connecting line list includes the one or more abnormal connecting lines calibrated by the second calibration module. The creating module may display the abnormal connecting line list on either side of the visual editing interface. In one embodiment, the creating module may display the abnormality information of each abnormal connecting line, such as by how many feature point matching pairs it falls short of the second preset threshold, in the abnormal connecting line list for reference.
The detection module is used for detecting the selection of the abnormal connecting line list. When the detection module detects the selection of the abnormal connecting line list, the display module is used for displaying one or more abnormal connecting lines on the visual editing interface in a mode different from the non-abnormal connecting lines. The abnormal and non-abnormal lines may be displayed in the manner described above to facilitate viewing and editing. When the abnormal information of a plurality of abnormal connecting lines is simultaneously checked, the general situation can be roughly known.
The modules that perform the analysis process on the key frames are exemplarily listed above, but it is understood that the modules may perform other analysis processes on the key frames to facilitate viewing and editing.
FIG. 10 illustrates a visual editing interface according to an exemplary embodiment of the present application. As shown in fig. 10, each circular icon represents a key frame, and adjacent circular icons are connected by a connecting line. When a circular icon, for example the fourth circular icon on the left, is selected, the key frame and the feature information of the key frame, for example attribute 1, attribute 2, attribute 3 …, are displayed on the visual editing interface; when a connecting line, for example the second-to-last connecting line on the left, is selected, its displacement and deflection information is displayed on the visual editing interface.
In the visual editing interface shown in fig. 10, when a plurality of key frames form a key frame list, the key frame list is represented by icons of large circles and small circles. Displaying an abnormal frame list and an abnormal connecting line list on the right side of the visual editing interface, when the abnormal frame list is selected, displaying a corresponding abnormal frame (such as an abnormal frame 1-2) as a black circle to be distinguished from a non-abnormal frame, and displaying a small circle in an icon of a key frame list containing the abnormal frame (such as an abnormal frame 3) as a black circle to be distinguished from an icon of the key frame list not containing the abnormal frame; when the exception link list is selected, the corresponding exception link (e.g., exception links 1-3) is bolded to distinguish it from non-exception links. By highlighting the abnormal frames, the key frame list containing the abnormal frames and the abnormal connecting lines on the visual editing interface, the viewing and editing can be conveniently carried out.
It is to be understood that the visual editing interface of fig. 10 is merely an example. In other embodiments of the present application, other forms of visual editing interfaces may be provided.
It should be noted that, in the system embodiments of the present invention, each unit and/or module mentioned is a logical unit and/or module. Physically, a logical unit and/or module may be a physical unit and/or module, a part of a physical unit and/or module, or a combination of multiple physical units and/or modules; the physical implementation of the logical unit and/or module itself is not what matters most, and the combination of functions implemented by these logical units and/or modules is the key to solving the technical problem addressed by the present invention. Furthermore, in order to highlight the innovative part of the present invention, the above system embodiments do not introduce units and/or modules that are less closely related to solving the technical problem proposed by the present invention; this does not mean that no other units and/or modules exist in the above system embodiments.
It is to be noted that, in the claims and the description of the present application, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All documents referred to herein are incorporated by reference into this application as if each were individually incorporated by reference. Furthermore, it should be understood that various changes and modifications of the present invention can be made by those skilled in the art after reading the above teachings of the present invention, and these equivalents also fall within the scope of the present invention as defined by the appended claims.

Claims (22)

1. A visual positioning method for visual editing, characterized by comprising the following steps:
acquiring a key frame of a scene image and feature information of the key frame, wherein the feature information comprises position information of the key frame;
creating a visual editing interface, and displaying icons representing the key frames on the visual editing interface according to the position information of the key frames;
upon detecting selection of an icon representing the key frame, displaying the key frame and/or feature information of the key frame for editing; and
constructing a visual positioning map through the edited key frames and/or the feature information of the key frames.
2. The visual positioning method for visual editing according to claim 1, wherein said visual positioning method is a simultaneous localization and mapping method.
3. A visual positioning method for visual editing according to claim 1, wherein in the step of displaying icons representing the key frames on the visual editing interface according to the position information of the key frames, a plurality of key frames with the same position information are formed into a key frame list and icons representing the key frame list are displayed on the visual editing interface;
in the step of displaying the key frames and/or the feature information of the key frames for editing after detecting selection of the icons representing the key frames, a plurality of key frames in the key frame list and/or feature information of the plurality of key frames are displayed for editing after detecting selection of the icon representing the key frame list.
4. A visual positioning method for visual editing according to claim 3, further comprising the steps of:
comparing the feature information of each key frame in the key frame list for similarity with the feature information of other adjacent key frames in the key frame list;
acquiring the key frame with the highest similarity to the feature information of each key frame according to the comparison result;
judging whether the similarity difference between each key frame and the key frame with the highest similarity exceeds a first preset threshold; and
calibrating, as abnormal frames, the key frames whose similarity difference from the key frame with the highest similarity exceeds the first preset threshold.
5. A visual positioning method for visual editing according to claim 4, further comprising the steps of:
providing an abnormal frame list on the visual editing interface, wherein the abnormal frame list comprises one or more abnormal frames; and
upon detecting selection of the abnormal frame list, displaying an icon representing a list of keyframes containing abnormal frames on the visual editing interface in a different manner than a list of keyframes not containing abnormal frames.
6. A visual positioning method for visual editing according to claim 1 wherein said feature information further comprises displacement and deflection information of said key frame from adjacent key frames;
the method further comprises the following steps after the step of displaying the icon representing the key frame on the visual editing interface according to the position information of the key frame:
displaying a connecting line connecting the icon representing the key frame and the icon representing the adjacent key frame on the visual editing interface;
displaying displacement and deflection information of the key frame and the adjacent key frame for editing after detecting the selection of the connection line; and
constructing the visual positioning map through the edited displacement and deflection information of the key frame and the adjacent key frame.
7. A visual positioning method for visual editing according to claim 6, further comprising the steps of:
judging whether the number of the feature point matching pairs of the key frame and the adjacent key frame exceeds a second preset threshold value or not; and
calibrating, as abnormal connecting lines, the connecting lines between the key frames and the adjacent key frames whose number of feature point matching pairs does not exceed the second preset threshold.
8. The visual positioning method for visual editing according to claim 7, further comprising the steps of:
providing an abnormal connecting line list on the visual editing interface, wherein the abnormal connecting line list comprises one or more abnormal connecting lines; and
after the selection of the abnormal connecting line list is detected, one or more abnormal connecting lines are displayed on the visual editing interface in a mode different from the non-abnormal connecting lines.
9. The visual positioning method for visual editing according to claim 6, further comprising, after the step of displaying the key frame and/or the feature information of the key frame for editing upon detecting selection of the icon representing the key frame, a step of correspondingly editing the feature information of the adjacent key frame according to the displacement and deflection information of the key frame and the adjacent key frame.
10. The visual positioning method for visual editing according to any one of claims 1 to 9, wherein editing the key frames and/or the feature information of the key frames comprises adding, moving and/or removing the key frames, and/or modifying the feature information of the key frames, wherein the feature information comprises coordinates of feature points, descriptors of the feature points and/or bag-of-words values of the feature points.
11. The visual positioning method for visual editing according to any one of claims 6 to 9, wherein editing the key frames and/or the feature information of the key frames comprises adding, moving and/or removing connecting lines between the key frames and adjacent key frames and/or modifying the displacement and deflection information of the key frames relative to adjacent key frames.
12. A visual positioning system for visual editing, the visual positioning system comprising:
a first acquisition module, configured to acquire a key frame of a scene image and feature information of the key frame, wherein the feature information comprises position information of the key frame;
a creation module, configured to create a visual editing interface and to display an icon representing the key frame on the visual editing interface according to the position information of the key frame acquired by the first acquisition module;
a detection module, configured to detect selection of the icon representing the key frame;
a display module, configured to display the key frame and/or the feature information of the key frame for editing after the detection module detects selection of the icon representing the key frame; and
a construction module, configured to construct the visual positioning map from the edited key frame and/or feature information of the key frame.
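A skeleton showing one possible decomposition into the modules named in claim 12; the method bodies are stubs and the dictionary-based key frame records are an assumption made only to keep the sketch self-contained.

```python
class FirstAcquisitionModule:
    def acquire(self, scene_image):
        """Return a key frame record with its feature information,
        including position information (stubbed here)."""
        return {"id": 0, "position": (0.0, 0.0), "features": []}

class CreationModule:
    def create_interface(self, keyframes):
        """Create the visual editing interface: one icon per key frame,
        placed according to the key frame's position information."""
        return {kf["id"]: kf["position"] for kf in keyframes}

class DetectionModule:
    def __init__(self, display_module):
        self.display_module = display_module

    def on_icon_selected(self, keyframe):
        # Selecting a key frame icon hands the frame to the display module.
        self.display_module.show_for_editing(keyframe)

class DisplayModule:
    def show_for_editing(self, keyframe):
        print("editing key frame", keyframe["id"], "at", keyframe["position"])

class ConstructionModule:
    def build_map(self, keyframes):
        """Construct the visual positioning map from the edited key frames."""
        return {kf["id"]: kf for kf in keyframes}

# Wiring the modules together in the order the claim describes.
acq, disp = FirstAcquisitionModule(), DisplayModule()
kf = acq.acquire(scene_image=None)
CreationModule().create_interface([kf])
DetectionModule(disp).on_icon_selected(kf)
ConstructionModule().build_map([kf])
```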
13. The visual positioning system for visual editing according to claim 12, wherein the visual positioning system is a simultaneous localization and mapping (SLAM) system.
14. The visual positioning system for visual editing according to claim 12, wherein the creation module is further configured to form a key frame list from a plurality of key frames having identical position information and to display an icon representing the key frame list on the visual editing interface;
the detection module is further configured to detect selection of the icon representing the key frame list; and
the display module is further configured to display the plurality of key frames and/or the feature information of the plurality of key frames in the key frame list for editing after the detection module detects selection of the icon representing the key frame list.
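A small sketch of forming a key frame list from key frames with identical position information, as in claim 14; rounding the coordinates to decide what counts as "identical" is an added assumption, as are the function and field names.

```python
from collections import defaultdict

def group_keyframes_by_position(keyframes, digits=3):
    """Group key frames whose position information is identical (after
    rounding, to tolerate float noise) into key frame lists; the
    editing interface then shows one icon per list."""
    lists = defaultdict(list)
    for kf in keyframes:
        key = tuple(round(c, digits) for c in kf["position"])
        lists[key].append(kf)
    return dict(lists)

# Example: two key frames captured at the same spot end up in one list.
frames = [{"id": 1, "position": (2.0, 3.0)},
          {"id": 2, "position": (2.0, 3.0)},
          {"id": 3, "position": (5.0, 1.0)}]
print({pos: [kf["id"] for kf in kfs]
       for pos, kfs in group_keyframes_by_position(frames).items()})
```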
15. The visual positioning system for visual editing according to claim 14, further comprising:
a comparison module, configured to compare the similarity of the feature information of each key frame in the key frame list with the feature information of the other adjacent key frames in the key frame list;
a second acquisition module, configured to acquire, according to the comparison result of the comparison module, the key frame whose feature information has the highest similarity to that of each key frame;
a first determination module, configured to determine whether the similarity difference between each key frame and the key frame with the highest similarity exceeds a first preset threshold; and
a first marking module, configured to mark, as an abnormal frame, a key frame whose similarity difference from the key frame with the highest similarity is determined by the first determination module to exceed the first preset threshold.
16. The visual positioning system for visual editing according to claim 15, wherein the creation module is further configured to provide an abnormal frame list on the visual editing interface, the abnormal frame list comprising one or more abnormal frames marked by the first marking module;
the detection module is further configured to detect selection of the abnormal frame list; and
the display module is further configured to display, after the detection module detects selection of the abnormal frame list, an icon representing a key frame list that contains abnormal frames on the visual editing interface in a manner different from that of a key frame list that contains no abnormal frames.
17. The visual positioning system for visual editing according to claim 12, wherein the feature information further comprises displacement and deflection information of the key frame relative to adjacent key frames;
the creation module is further configured to display, on the visual editing interface, a connecting line that connects the icon representing the key frame and the icon representing the adjacent key frame;
the detection module is further configured to detect selection of the connecting line;
the display module is further configured to display the displacement and deflection information of the key frame and the adjacent key frame for editing after the detection module detects selection of the connecting line; and
the construction module is further configured to construct the visual positioning map from the edited displacement and deflection information of the key frame and the adjacent key frame.
18. The visual positioning system for visual editing according to claim 17, further comprising:
a second determination module, configured to determine whether the number of feature point matching pairs between the key frame and the adjacent key frame exceeds a second preset threshold; and
a second marking module, configured to mark, as an abnormal connecting line, a connecting line between the key frame and the adjacent key frame whose number of feature point matching pairs is determined by the second determination module not to exceed the second preset threshold.
19. The visual positioning system for visual editing according to claim 18, wherein the creation module is further configured to provide an abnormal connecting line list on the visual editing interface, the abnormal connecting line list comprising one or more abnormal connecting lines marked by the second marking module;
the detection module is further configured to detect selection of the abnormal connecting line list; and
the display module is further configured to display, after the detection module detects selection of the abnormal connecting line list, the one or more abnormal connecting lines on the visual editing interface in a manner different from that of non-abnormal connecting lines.
20. The visual positioning system for visual editing according to claim 17, wherein the visual positioning system further comprises an editing module configured to, when the feature information of the key frame is edited, correspondingly edit the feature information of the adjacent key frames according to the displacement and deflection information of the key frame and the adjacent key frames.
21. The visual positioning system for visual editing according to any one of claims 12 to 20, wherein the visual positioning system further comprises an editing module configured to add, move and/or remove the key frames and/or to modify the feature information of the key frames, wherein the feature information comprises coordinates of feature points, descriptors of feature points and/or bag-of-words values of feature points.
22. The visual positioning system for visual editing according to any one of claims 17 to 20, wherein the visual positioning system further comprises an editing module configured to add, move and/or remove connecting lines between the key frames and adjacent key frames and/or to modify the displacement and deflection information of the key frames relative to adjacent key frames.
CN201811189739.6A 2018-10-12 2018-10-12 Visual positioning method and system for visual editing Active CN111046698B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811189739.6A CN111046698B (en) 2018-10-12 2018-10-12 Visual positioning method and system for visual editing
PCT/CN2019/108294 WO2020073818A1 (en) 2018-10-12 2019-09-26 Visual localization method and system for visual editing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811189739.6A CN111046698B (en) 2018-10-12 2018-10-12 Visual positioning method and system for visual editing

Publications (2)

Publication Number Publication Date
CN111046698A true CN111046698A (en) 2020-04-21
CN111046698B CN111046698B (en) 2023-06-20

Family

ID=70164154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811189739.6A Active CN111046698B (en) 2018-10-12 2018-10-12 Visual positioning method and system for visual editing

Country Status (2)

Country Link
CN (1) CN111046698B (en)
WO (1) WO2020073818A1 (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11397088B2 (en) * 2016-09-09 2022-07-26 Nanyang Technological University Simultaneous localization and mapping methods and apparatus
CN107194991B (en) * 2017-05-17 2021-01-01 西南科技大学 Three-dimensional global visual monitoring system construction method based on skeleton point local dynamic update

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105247573A (en) * 2013-06-11 2016-01-13 高通股份有限公司 Interactive and automatic 3-d object scanning method for the purpose of database creation
US20160217329A1 (en) * 2015-01-22 2016-07-28 Electronics And Telecommunications Research Institute Apparatus and method for providing primitive visual knowledge
CN107301402A (en) * 2017-06-30 2017-10-27 锐捷网络股份有限公司 A kind of determination method, device, medium and the equipment of reality scene key frame
CN107369183A (en) * 2017-07-17 2017-11-21 广东工业大学 Towards the MAR Tracing Registration method and system based on figure optimization SLAM
CN108267121A (en) * 2018-01-24 2018-07-10 锥能机器人(上海)有限公司 The vision navigation method and system of more equipment under a kind of variable scene
CN108375979A (en) * 2018-02-10 2018-08-07 浙江工业大学 Self-navigation robot general-purpose control system based on ROS
CN108648274A (en) * 2018-05-10 2018-10-12 华南理工大学 A kind of cognition point cloud map creation system of vision SLAM
CN108648270A (en) * 2018-05-12 2018-10-12 西北工业大学 Unmanned plane real-time three-dimensional scene reconstruction method based on EG-SLAM

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112817789A (en) * 2021-02-23 2021-05-18 浙江大华技术股份有限公司 Modeling method and device based on browser transmission
CN112817789B (en) * 2021-02-23 2023-01-31 浙江大华技术股份有限公司 Modeling method and device based on browser transmission

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code
Ref country code: HK
Ref legal event code: DE
Ref document number: 40026925
Country of ref document: HK
GR01 Patent grant