CN114549978A - Mobile robot operation method and system based on multiple cameras - Google Patents
Mobile robot operation method and system based on multiple cameras
- Publication number
- CN114549978A (application CN202210113040.1A)
- Authority
- CN
- China
- Prior art keywords
- image
- scene
- camera
- target image
- scene image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/66—Analysis of geometric attributes of image moments or centre of gravity
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A mobile robot operation method based on multiple cameras: multiple cameras acquire indoor scene images, a coordinate system is established along two sides of each scene image, and an image-capture area is set for each camera. A rectangular boundary region is set in the scene image, a target image template is established, and the template is matched against the scene image with a gray matching algorithm to obtain the center-point coordinates and the contour of the target image in the scene image; the coordinates of the target image in the coordinate system are then determined from the center-point coordinates and the contour. By matching the image template against the scene image with a gray matching algorithm, this multi-camera mobile robot operation method effectively improves matching precision under a variety of conditions, improves the accuracy of object positioning, reduces hardware cost, improves the reliability of the positioning system through the conversion between variable and quantitative values, and accurately obtains the center-point coordinates and the contour coordinates of the target image.
Description
Technical Field
The invention relates to the technical field of indoor positioning of sweeping robots, and in particular to a multi-camera-based mobile robot operation method and a multi-camera-based mobile robot operation system.
Background
Existing sweeping robots clean indoor spaces and play an important role. As usage scenarios change, a sweeping robot must be accurately positioned indoors in order to handle navigation, cleaning, and obstacle avoidance.
However, in the image data of existing sweeping robots, the color of the robot easily blends with other similar objects in the indoor scene, so calibration is difficult and accurate tracking cannot be achieved.
The main current method is to position by comparing static and dynamic objects, which places high demands on computing power and on the design and cost of the system algorithm, and is affected by the environment (for example, mirror-like reflections from products). The problem arises because improvements in hardware have encouraged positioning moving objects purely by algorithm, neglecting the conversion between variable and quantitative values.
Disclosure of Invention
The invention aims to overcome the problems in the prior art and provides a multi-camera mobile robot operation method and operation system with high calibration accuracy and low cost.
The technical scheme of the invention is as follows:
a mobile robot running method based on multiple cameras,
the method comprises the following steps that a plurality of cameras acquire indoor scene images, coordinate systems are established according to two sides of the scene images, and a scene image area of each camera is set;
a rectangular bounding region in the scene image is set,
establishing a target image template, matching the target image template with the scene image by using a gray matching algorithm to obtain the coordinates and the outline of the central point of the target image in the scene image,
and determining the coordinates of the target image in a coordinate system according to the coordinates of the central point and the outline of the target image.
After the rectangular boundary region is set in the scene image, an image occlusion region is established for the scene image, which comprises:
processing the area of the scene image outside the rectangular boundary region;
graying the scene image in this area and then XORing it with the constant 255.
When the cameras capture images, the image-capture area of each camera is set, and a camera is designated according to the partitioned definition of the image-capture areas of the scene images.
A coordinate system is established with two sides of the rectangular boundary as axes, and the coordinates of the position of the target image within the rectangular boundary are calibrated.
The scene image of each camera is provided with an independent coordinate system.
The coordinate systems in the scene images of adjacent cameras are linked to the coordinate system of the first camera's scene image, so that the plurality of cameras form a panoramic coordinate system.
Scene images are obtained by cameras arranged at the top of the room and capturing downward, and the capture areas of the plurality of cameras together cover the entire indoor scene.
The scene images of successive cameras are analyzed, and the robot is marked as lost when it leaves the rectangular area in one scene image without entering the rectangular area of the next camera.
There are a plurality of image templates, whose pixel sizes range from that of a normally acquired target image down to the minimum identifiable pixel size of the target image.
A mobile robot operation system based on multiple cameras comprises:
a camera module, for acquiring indoor scene images with the cameras and establishing a coordinate system;
a template construction module, for establishing an image template;
a matching module, for matching the image template against the scene image with a gray matching algorithm and normalizing the cross-correlation value to obtain a matching result;
and a calculation module, for calculating the center-point coordinates and the contour coordinates of the target image in the scene image.
Beneficial effects:
a mobile robot operation method based on multiple cameras,
the method comprises the following steps that a plurality of cameras acquire indoor scene images, coordinate systems are established according to two sides of the scene images, and a scene image area of each camera is set;
a rectangular bounding region in the scene image is set,
establishing a target image template, matching the target image template with the scene image by using a gray matching algorithm to obtain the coordinates and the outline of the central point of the target image in the scene image,
and determining the coordinates of the target image in a coordinate system according to the coordinates of the central point and the outline of the target image.
In this multi-camera-based mobile robot operation method, the image template is matched against the scene image with a gray matching algorithm, which effectively improves matching precision under a variety of conditions, improves the accuracy of object positioning, reduces hardware cost, improves the reliability of the positioning system through the conversion between variable and quantitative values, and accurately obtains the center-point coordinates and the contour coordinates of the target image.
Drawings
FIG. 1 is a flow chart of a target image template of the present invention;
fig. 2 is a schematic diagram of the rectangular boundary, contour and motion trend region of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings.
Example 1
A mobile robot operation method based on multiple cameras,
the method comprising the following steps: a plurality of cameras acquire indoor scene images, a coordinate system is established along two sides of each scene image, and an image-capture area is set for each camera;
a rectangular boundary region is set in the scene image;
a target image template is established and matched against the scene image with a gray matching algorithm to obtain the center-point coordinates and the contour of the target image in the scene image;
and the coordinates of the target image in the coordinate system are determined from the center-point coordinates and the contour of the target image.
Obtaining the center-point coordinates and the contour of the target image in the scene image comprises:
matching the scene image within the rectangular boundary region against the target image template and determining the contour of the target image in the scene image;
and storing the coordinate points of the rectangular boundary region and the contour.
the scene image of the invention is obtained by the following method:
controlling the cameras to acquire indoor scene images, wherein 1 or more cameras are used for respectively taking images of indoor scenes, when the indoor scenes are large, the indoor scenes need to be taken by the multiple cameras, and the number of the cameras is in proportional relation with the range of the indoor scene images; the control camera acquires an indoor scene image, and the camera arranged at the indoor top downwards shoots to acquire the indoor scene image; and the time for obtaining the indoor scene image is the minimum time for the robot to move away from the motion trend area.
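As an illustration only, this acquisition interval can be computed from the robot's maximum speed and the margin of the motion trend region; a minimal sketch, where every parameter name is an assumption:

```python
def capture_interval(trend_margin_px, px_per_meter, max_speed_mps):
    """Seconds between acquisitions, chosen so a new frame arrives
    before the robot can leave the motion trend region."""
    margin_m = trend_margin_px / px_per_meter  # convert pixel margin to meters
    return margin_m / max_speed_mps

# e.g. a 120 px margin at 400 px/m and 0.3 m/s -> one frame per second
interval_s = capture_interval(120, 400, 0.3)
```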
Each camera is assigned an image-capture angle and an image-capture area so that the cameras completely cover the indoor scene. The image-capture areas of the cameras are defined, images of the overlapping areas between scene images are assigned to designated cameras, and when the target image is located within a given capture area, the data in the image are calculated from the data of the corresponding camera.
As shown in fig. 1, the method for creating the target image template of the present invention comprises filtering a scene image into a black-and-white photograph through gray processing and extracting a template of the target image, thereby obtaining an image template.
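A minimal sketch of this template-creation step, assuming OpenCV conventions and a hypothetical (x, y, w, h) region of interest around the robot:

```python
import cv2

def build_target_template(scene_bgr, roi):
    """Gray-process the scene into a 'black-and-white photograph' and
    crop the target region out of it to obtain the image template."""
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)  # gray processing
    x, y, w, h = roi                                    # assumed box around the robot
    return gray[y:y + h, x:x + w]                       # extracted template patch
```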
The pixel size of the target image template ranges from the minimum identifiable pixel size of the target image up to the pixel size of the complete target image; a plurality of target image templates can be provided, covering the pixel sizes from the minimum identifiable size to the complete target image, which effectively improves the overall matching success rate.
Of course, the identified pixel sizes can also be adjusted according to differences between target images. The position of the target image template is also meaningful: the target images in the target image templates are located on the two diagonal lines of the scene image.
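The multi-size templates could be generated, for example, by repeatedly shrinking the full-size template until the minimum identifiable size is reached; a sketch, with the minimum side length and the scale step as assumptions:

```python
import cv2

def template_pyramid(template, min_side=16, scale=0.8):
    """Build templates from the full target-image size down to the
    minimum identifiable pixel size (assumed to be `min_side` px)."""
    pyramid = [template]
    while min(pyramid[-1].shape[:2]) * scale >= min_side:
        h, w = pyramid[-1].shape[:2]
        # cv2.resize takes (width, height)
        pyramid.append(cv2.resize(pyramid[-1], (int(w * scale), int(h * scale))))
    return pyramid
```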
The objects in the scene image are all expressed through a coordinate system that takes two sides of the scene image as coordinate axes (the X axis and the Y axis), with each pixel serving as a coordinate point (x, y); objects of interest in the scene image are marked in this coordinate system so that the scene image can be used quickly.
The coordinate system in this application can be set through the rectangular boundary: a point on the rectangular boundary serves as the origin and two of its sides serve as the coordinate axes. When the scene image needs to be processed, the scene image within the rectangular boundary can be matched directly, and the image outside the rectangular boundary need not be analyzed or compared.
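A sketch of boundary-restricted matching under these conventions, using OpenCV's normalized cross-correlation as a stand-in for the gray matching algorithm:

```python
import cv2

def match_in_boundary(scene_gray, template, boundary):
    """Match only inside the rectangular boundary; the returned center is
    expressed in the boundary's own coordinate system (origin at its corner)."""
    bx, by, bw, bh = boundary
    roi = scene_gray[by:by + bh, bx:bx + bw]       # ignore pixels outside the box
    result = cv2.matchTemplate(roi, template, cv2.TM_CCORR_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)  # best normalized score + location
    th, tw = template.shape[:2]
    center = (top_left[0] + tw // 2, top_left[1] + th // 2)
    return score, center
```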
The coordinate system of the invention can also be formed in other ways: the coordinate systems in the scene images of adjacent cameras are linked to the coordinate system of the first camera's scene image, and the scene images of the plurality of cameras form a panoramic coordinate system.
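A minimal sketch of this coordinate linkage; the per-camera offsets are assumed to come from calibrating the overlap between adjacent views:

```python
def to_panoramic(camera_index, point, offsets):
    """Convert a point from one camera's scene-image coordinates into the
    panoramic coordinate system anchored at the first camera.
    `offsets[i]` is the assumed (dx, dy) of camera i's origin
    relative to camera 0."""
    dx, dy = offsets[camera_index]
    x, y = point
    return (x + dx, y + dy)
```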
Processing the scene image of this application can run into very low efficiency. This is because the scene image is a wide-angle image containing a great deal of environmental noise, such as walls and decorations; the complexity of the room introduces large variables, which reduces recognition efficiency and increases the probability of misjudgment.
As shown in fig. 2, the present invention uses a rectangular boundary and a contour to process the scene image quickly: a rectangular boundary is first set to exclude the definite parts of the scene image, such as fixed objects; the target image template is then matched within the rectangular boundary to find the target image in the scene image and determine its center coordinate point and contour coordinates.
The range of the rectangular boundary can be designed as needed: it not only removes environmental noise but can also be set as a specific recognition area. The rectangular boundary uses coordinates to define an area and is applied automatically whenever a scene image needs to be recognized.
Pixels of the scene image outside the rectangular boundary are processed with an image occlusion method: an image occlusion region is established for the scene image, comprising the area of the scene image outside the rectangular boundary region; the scene image in this area is grayed and then XORed with the constant 255. In the description, 255 represents black, so all the places outside the box become black.
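A sketch of this occlusion step as described, assuming OpenCV; note that XOR with 255 inverts the grayscale values in the occluded region:

```python
import cv2
import numpy as np

def occlude_outside_boundary(scene_bgr, boundary):
    """Gray the scene, then XOR everything outside the rectangular
    boundary with the constant 255."""
    gray = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2GRAY)
    bx, by, bw, bh = boundary
    mask = np.full(gray.shape, 255, dtype=np.uint8)
    mask[by:by + bh, bx:bx + bw] = 0       # inside the box: XOR with 0 keeps pixels
    return cv2.bitwise_xor(gray, mask)     # outside the box: values are inverted
```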
In addition to the rectangular boundary and the contour, a motion trend region can be set. Specifically, matching the scene image against the template formed from the target image means comparing the template with the scene image, identifying the target image in the scene image, and determining its contour; the motion trend region is then determined from the contour, as a rectangle whose area is larger than that of the target image.
When the target image cannot be found by matching in the next frame's scene image, the entire scene image of the next frame is matched, and the contour and the motion trend region are determined anew.
The motion trend region comprises a horizontal rectangular trend and a vertical rectangular trend: the horizontal rectangular trend runs from the left boundary of the scene image to the right, and the vertical rectangular trend runs from the upper boundary of the scene image downward; the widths of the horizontal and vertical rectangular trends are larger than the width of the contour.
The motion trend region can be used to predict the robot's position and to track and position the robot; at the same time, when a scene image is recognized, the pixels within the motion trend region can be matched preferentially against the target image. This improves matching efficiency and achieves a better tracking effect.
The motion trend region realizes tracking and positioning of the target image within the scene image, effectively reducing both the matching time for the scene image and the amount of matching computation.
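A sketch of trend-region-first matching with a whole-frame fallback, per the two preceding paragraphs; the score threshold is an assumption:

```python
import cv2

def track_with_trend_region(scene_gray, template, trend_region, threshold=0.99):
    """Match preferentially inside the motion trend region; if the target
    is not found there, fall back to matching the whole frame so the caller
    can redetermine the contour and trend region."""
    def best_center(img, origin):
        res = cv2.matchTemplate(img, template, cv2.TM_CCORR_NORMED)
        _, score, _, loc = cv2.minMaxLoc(res)
        if score < threshold:
            return None                     # no acceptable match in this area
        th, tw = template.shape[:2]
        return (origin[0] + loc[0] + tw // 2, origin[1] + loc[1] + th // 2)

    tx, ty, tw_, th_ = trend_region
    hit = best_center(scene_gray[ty:ty + th_, tx:tx + tw_], (tx, ty))
    return hit if hit is not None else best_center(scene_gray, (0, 0))
```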
The scene image of the invention also undergoes image preprocessing, which means adjusting parameters of the scene image, namely the red, yellow, and blue components, hue, saturation, and brightness, and converting the scene image into an image with a single parameter. The scene image with this single parameter is matched against the template image. The camera in this application is a CMOS camera connected over a USB 2.0 interface; resolution: 2 megapixels (effective pixels: 1.2 to 1.5 megapixels); frame rate: 30; sensitivity: 120-.
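The single-parameter conversion might look like the following sketch, where using one HSV channel is an assumed concrete choice rather than the patent's own method:

```python
import cv2

def to_single_parameter(scene_bgr, channel=2):
    """Reduce the scene image to a single parameter before matching;
    here one HSV channel (default V, brightness) stands in for the
    'single parameter' described above."""
    hsv = cv2.cvtColor(scene_bgr, cv2.COLOR_BGR2HSV)
    return hsv[:, :, channel]
```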
The method further comprises obtaining an initial image of the scene, comparing it with the actual scene image, and identifying the obstacle regions in the scene image from the differences; the areas corresponding to the obstacles are extracted to generate an obstacle-deletion map. The actual coverage area is then determined from the cleaning area and the obstacle-deletion map, and the coverage rate is determined from the actual coverage area.
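A sketch of the obstacle-deletion map and the coverage-rate computation, with an assumed difference threshold:

```python
import cv2
import numpy as np

def coverage_rate(initial_gray, current_gray, cleaning_area_px, diff_thresh=30):
    """Differences between the initial and current scene images mark
    obstacle regions; removing them from the nominal cleaning area
    gives the actual coverage."""
    diff = cv2.absdiff(initial_gray, current_gray)          # per-pixel difference
    obstacle_px = int(np.count_nonzero(diff > diff_thresh)) # obstacle-deletion map
    actual_coverage_px = max(cleaning_area_px - obstacle_px, 0)
    return actual_coverage_px / cleaning_area_px
```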
From a first position and a second position, the line connecting the center-point coordinates of the first position and the center-point coordinates of the second position is determined as the trajectory line; the coordinate systems of the rectangular boundaries in two adjacent scene images are linked.
For matching, the template formed from the target image is filtered into a black-and-white photograph, and the degree of black and white is quantified into a numerical value (0-255); the gray matching method is a calculation that normalizes the template image against the target image.
For example, when a template image T(x, y) of size L × K moves from left to right and from top to bottom within an N × M target image f(x, y) (L ≤ M and K ≤ N), the cross-correlation value between the template image and the region of the target image at (i, j) (0 ≤ i ≤ M−1; 0 ≤ j ≤ N−1) is expressed as:
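In a standard form consistent with these definitions (a reconstruction, since the original formula is not reproduced in this text):

$$R(i,j)=\sum_{x=0}^{L-1}\sum_{y=0}^{K-1} T(x,y)\,f(x+i,\,y+j)$$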
the cross-correlation value is very sensitive to amplitude variations in the picture and template, such as pixel density, so normalization is needed to eliminate the effect of amplitude:
where w̄ is the average density value of the pixels in the template w. Simply speaking, normalization sets a threshold so that all values map to 0 or 1; normalization improves recognition speed but correspondingly can increase the misjudgment rate. The matching success rate is therefore normalized in this way: a rate lower than 100% is judged an unsuccessful match, and the match is judged successful only when the rate reaches 100%.
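In OpenCV terms this acceptance rule might be sketched as follows; TM_CCOEFF_NORMED implements a zero-mean normalized correlation of the kind above, and the epsilon is an assumption to absorb floating-point error:

```python
import cv2

def is_successful_match(scene_gray, template):
    """Accept a match only when the normalized score reaches 100%,
    as the description requires."""
    res = cv2.matchTemplate(scene_gray, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, _ = cv2.minMaxLoc(res)
    return score >= 1.0 - 1e-6
```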
The matching rate can also be increased by other means:
in the invention, the brightness of the indoor lighting is set uniformly across the environment, which effectively improves matching efficiency.
Preferably, a marker is superimposed on the target image — the marker being one or more of a non-reflective material, a single color, and a symmetric pattern — so that the target image is matched more easily.
The final acceptance form of moving-object detection is to output and mark the center of the moving object on screen. First, the center-point coordinates of the moving object obtained by matching are output within the matching area and stored in a cluster data structure; then the x and y coordinate values are output through an array into the contour-parameter clusters of the ROI and stored. The point must be converted into the ROI's storage form and written over the ROI so that it can be output.
Two parameters must be output first: the number of successfully matched images and the point coordinates of the successful matches; the current point coordinates of the moving object are output only when the number of successfully matched images is greater than zero.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps need not be performed in the exact order shown and may be performed in other orders. Moreover, at least a portion of the steps in the flowchart may include multiple sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times, and need not be performed in sequence but may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
A mobile robot operation system based on multiple cameras comprises: a camera module, for acquiring indoor scene images and establishing a coordinate system; a template construction module, for establishing an image template; a matching module, for matching the image template against the scene image with a gray matching algorithm and normalizing the cross-correlation value to obtain a matching result; and a calculation module, for calculating the center-point coordinates and the contour coordinates of the target image in the scene image.
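A structural sketch of these four modules; all class and method names are illustrative assumptions, not from the patent:

```python
import cv2

class MultiCameraRobotSystem:
    """Sketch of the camera, template-construction, matching, and
    calculation modules named in the system description."""

    def __init__(self, cameras):
        # camera module: acquires indoor scene images, holds coordinate setup
        self.cameras = cameras

    def build_template(self, scene_gray, roi):
        # template construction module: crop a template from the gray scene
        x, y, w, h = roi
        return scene_gray[y:y + h, x:x + w]

    def match(self, scene_gray, template):
        # matching module: gray matching with normalized cross-correlation
        res = cv2.matchTemplate(scene_gray, template, cv2.TM_CCOEFF_NORMED)
        return cv2.minMaxLoc(res)

    def locate(self, match_result, template_shape):
        # calculation module: center-point coordinates from the match location
        _, _, _, top_left = match_result
        th, tw = template_shape[:2]
        return (top_left[0] + tw // 2, top_left[1] + th // 2)
```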
The present application further provides another embodiment, which is to provide a computer-readable storage medium storing a multi-camera based mobile robot operating program, where the multi-camera based mobile robot operating program is executable by at least one processor to cause the at least one processor to perform the steps of a multi-camera based mobile robot operating method as described above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
It is to be understood that the above-described embodiments are merely illustrative of some, and not all, embodiments of the invention, and that the appended drawings illustrate preferred embodiments without limiting the scope of the invention. This application can be embodied in many different forms; these embodiments are provided so that the disclosure of the application will be thorough. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or substitute equivalents for some of their features. All equivalent structures made using the contents of the specification and drawings of this application, applied directly or indirectly in other related technical fields, fall within the protection scope of this application.
Claims (10)
1. A mobile robot operation method based on multiple cameras is characterized in that:
the method comprises the following steps that a plurality of cameras acquire indoor scene images, coordinate systems are established according to two sides of the scene images, and a scene image area of each camera is set;
a rectangular bounding region in the scene image is set,
establishing a target image template, matching the target image template with the scene image by using a gray matching algorithm to obtain the coordinates and the outline of the central point of the target image in the scene image,
and determining the coordinates of the target image in a coordinate system according to the coordinates of the central point and the outline of the target image.
2. The multi-camera based mobile robot operation method according to claim 1, wherein after the rectangular boundary region is set in the scene image, an image occlusion region is established for the scene image, comprising:
processing the area of the scene image outside the rectangular boundary region;
graying the scene image in this area and then XORing it with the constant 255.
3. The multi-camera based mobile robot operation method according to claim 1, wherein when the cameras capture images, the image-capture area of each camera is set, and a camera is designated according to the partitioned definition of the image-capture areas of the scene images.
4. The multi-camera based mobile robot operation method according to claim 2, wherein a coordinate system is established with two sides of the rectangular boundary as axes, and the coordinates of the position of the target image within the rectangular boundary are calibrated.
5. The multi-camera based mobile robot operation method according to claim 1, wherein the scene image of each camera is provided with an independent coordinate system.
6. The multi-camera based mobile robot operation method according to claim 1, wherein the coordinate systems in the scene images of adjacent cameras are linked to the coordinate system of the first camera's scene image, and the plurality of cameras form a panoramic coordinate system.
7. The multi-camera based mobile robot operation method according to claim 1, wherein scene images are obtained by cameras arranged at the top of the room and capturing downward, and the capture areas of the plurality of cameras together cover the entire indoor scene.
8. The multi-camera based mobile robot operation method according to claim 1, wherein the scene images of successive cameras are analyzed, and the robot is marked as lost when it leaves the rectangular area in one scene image without entering the rectangular area of the next camera.
9. The multi-camera based mobile robot operation method according to claim 1, wherein there are a plurality of image templates, whose pixel sizes range from that of a normally acquired target image down to the minimum identifiable pixel size of the target image.
10. A mobile robot operation system based on multiple cameras, comprising:
a camera module, for acquiring indoor scene images with the cameras and establishing a coordinate system;
a template construction module, for establishing an image template;
a matching module, for matching the image template against the scene image with a gray matching algorithm and normalizing the cross-correlation value to obtain a matching result;
and a calculation module, for calculating the center-point coordinates and the contour coordinates of the target image in the scene image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210113040.1A CN114549978A (en) | 2022-01-29 | 2022-01-29 | Mobile robot operation method and system based on multiple cameras |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210113040.1A CN114549978A (en) | 2022-01-29 | 2022-01-29 | Mobile robot operation method and system based on multiple cameras |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114549978A true CN114549978A (en) | 2022-05-27 |
Family
ID=81673805
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210113040.1A Pending CN114549978A (en) | 2022-01-29 | 2022-01-29 | Mobile robot operation method and system based on multiple cameras |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114549978A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115811664A (en) * | 2022-11-23 | 2023-03-17 | 广州高新兴机器人有限公司 | Multi-scene fixed-point snapshot method and system, storage medium and electronic equipment |
CN115811664B (en) * | 2022-11-23 | 2024-08-27 | 广州高新兴机器人有限公司 | Multi-scene fixed-point snapshot method and system, storage medium and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |