CN110120099A - Localization method, device, recognition and tracking system and computer-readable medium - Google Patents


Info

Publication number
CN110120099A
CN110120099A (application CN201810119776.3A)
Authority
CN
China
Prior art keywords
image
target
feature point
target feature
physical coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810119776.3A
Other languages
Chinese (zh)
Inventor
胡永涛
于国星
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN201810119776.3A (published as CN110120099A)
Priority to PCT/CN2019/073578 (published as WO2019154169A1)
Publication of CN110120099A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01Indexing scheme relating to G06F3/01
    • G06F2203/012Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a localization method, a device, a recognition and tracking system, and a computer-readable medium, belonging to the technical field of image processing. The method comprises: a processor obtains a target image of a visual interactive device acquired by an image acquisition device, the target image containing multiple coplanar target feature points corresponding to the visual interactive device; obtains the pixel coordinates of each target feature point in the image coordinate system corresponding to the target image; and obtains, from the pixel coordinates of all target feature points and physical coordinates obtained in advance, the position and rotation information between the image acquisition device and the visual interactive device, where the physical coordinates are the coordinates of the target feature points, obtained in advance, in the physical coordinate system corresponding to the visual interactive device. From this position and rotation information, the positional relationship between the image acquisition device and the visual interactive device can be determined with high accuracy.

Description

Localization method, device, recognition and tracking system and computer-readable medium
Technical field
The present invention relates to the technical field of image processing, and more particularly to a localization method, a device, a recognition and tracking system, and a computer-readable medium.
Background art
In recent years, with the development of science and technology, technologies such as augmented reality (AR) and virtual reality (VR) have increasingly become research hotspots at home and abroad. Taking augmented reality as an example, augmented reality is a technology that augments the user's perception of the real world with information provided by a computer system: computer-generated virtual objects, scenes, or system prompts are superimposed onto the real scene to augment or modify the perception of the real-world environment, or of data representing it.
In interactive systems such as virtual reality systems and augmented reality systems, target objects need to be recognized and tracked. Existing recognition and tracking methods are usually implemented with magnetic sensors, optical sensors, ultrasound, inertial sensors, image processing of the target object, and the like, but their results are generally unsatisfactory: magnetic sensors, optical sensors, and ultrasound are usually strongly affected by the environment, and inertial sensors place high demands on precision. The market therefore urgently needs a completely new recognition and tracking method that achieves low-cost, high-precision interaction; and image processing of the target object, as a key technology for recognition and tracking, likewise needs a complete and effective solution.
Summary of the invention
To remedy the above defects, the invention proposes a localization method, a device, a recognition and tracking system, and a computer-readable medium.
In a first aspect, an embodiment of the invention provides a localization method applied to a recognition and tracking system, the system including an image acquisition device and a visual interactive device with multiple feature points. The method includes: obtaining a target image, acquired by the image acquisition device, containing the visual interactive device, the target image including multiple coplanar target feature points corresponding to the visual interactive device; obtaining the pixel coordinates of the target feature points in the target image in the image coordinate system corresponding to the target image; and obtaining, according to the pixel coordinates of the target feature points in the target image and physical coordinates of the target feature points obtained in advance, the position and rotation information between the image acquisition device and the visual interactive device, where the physical coordinates are the coordinates of the target feature points, obtained in advance, in the physical coordinate system corresponding to the visual interactive device.
In a second aspect, an embodiment of the invention also provides a positioning device applied to the processor of a recognition and tracking system, the system further including an image acquisition device and a visual interactive device with multiple feature points. The device includes a first acquisition unit, a second acquisition unit, and a processing unit. The first acquisition unit is used to obtain the target image of the visual interactive device acquired by the image acquisition device, the target image including multiple coplanar target feature points corresponding to the visual interactive device. The second acquisition unit is used to obtain the pixel coordinates of the target feature points in the target image in the image coordinate system corresponding to the target image. The processing unit is used to obtain, according to the pixel coordinates of the target feature points in the target image and the corresponding physical coordinates of the target feature points obtained in advance, the position and rotation information between the image acquisition device and the visual interactive device, where the physical coordinates are the coordinates of the target feature points, obtained in advance, in the physical coordinate system corresponding to the visual interactive device.
In a third aspect, an embodiment of the invention also provides a recognition and tracking system including an image acquisition device and a visual interactive device with multiple feature points, the image acquisition device being connected to a processor. The image acquisition device is used to acquire the target image of the visual interactive device, the target image including multiple coplanar target feature points corresponding to the visual interactive device. The processor is used to: obtain the target image of the visual interactive device acquired by the image acquisition device; obtain the pixel coordinates of the target feature points in the target image in the image coordinate system corresponding to the target image; and obtain, according to the pixel coordinates of the target feature points in the target image and the corresponding physical coordinates of the target feature points obtained in advance, the position and rotation information between the image acquisition device and the visual interactive device, where the physical coordinates are the coordinates of the target feature points, obtained in advance, in the physical coordinate system corresponding to the visual interactive device.
In a fourth aspect, an embodiment of the invention also provides a computer-readable medium carrying program code executable by a processor, the program code causing the processor to execute the above method.
With the localization method, device, recognition and tracking system, and computer-readable medium provided by the embodiments of the invention, after the target image of the visual interactive device acquired by the image acquisition device is obtained, the multiple target feature points in the target image are determined, the pixel coordinates of each target feature point in the image coordinate system corresponding to the target image are obtained, and the position and rotation information between the image acquisition device and the visual interactive device is obtained from the pixel coordinates and physical coordinates of all target feature points. The positional relationship between the image acquisition device and the visual interactive device can thus be determined from this position and rotation information with high accuracy.
Other features and advantages of the embodiments of the invention will be set forth in the following description, will in part be apparent from the description, or may be learned by practicing the embodiments of the invention. The objectives and other advantages of the embodiments of the invention can be realized and obtained by the structures particularly pointed out in the written description, claims, and drawings.
Brief description of the drawings
To explain the technical solutions in the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a structural schematic diagram of the recognition and tracking system provided by an embodiment of the invention;
Fig. 2 shows a schematic diagram of a marker provided by one embodiment of the invention;
Fig. 3 shows a schematic diagram of a marker provided by another embodiment of the invention;
Fig. 4 shows a flowchart of the localization method provided by one embodiment of the invention;
Fig. 5 shows a schematic diagram of the camera coordinate system provided by an embodiment of the invention;
Fig. 6 shows a schematic diagram of the physical coordinate system provided by an embodiment of the invention;
Fig. 7 shows a flowchart of the localization method provided by another embodiment of the invention;
Fig. 8 shows a block diagram of the positioning device provided by one embodiment of the invention;
Fig. 9 shows a block diagram of the positioning device provided by another embodiment of the invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. The components of the embodiments of the invention, as generally described and illustrated in the drawings here, can be arranged and designed in a variety of different configurations. The following detailed description of the embodiments of the invention provided in the drawings is therefore not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without creative effort fall within the protection scope of the invention.
It should also be noted that similar labels and letters denote similar items in the following drawings; once an item is defined in one drawing, it need not be further defined and explained in subsequent drawings. In the description of the invention, the terms "first", "second", and the like are used only to distinguish the description and are not to be understood as indicating or implying relative importance.
Referring to Fig. 1, a recognition and tracking system provided by an embodiment of the invention is shown. The recognition and tracking system 10 includes a head-mounted display device 100 and a visual interactive device.
The visual interactive device includes a first background and at least one marker distributed on the first background according to a specific rule. Each marker includes a second background and several sub-markers distributed on the second background according to a specific rule, and each sub-marker has one or more feature points. The first background and the second background are clearly distinguishable; for example, the first background may be black and the second background white. In this embodiment, the distribution rule of the sub-markers differs from marker to marker, so the image corresponding to each marker is different.
A sub-marker is a pattern with a definite shape, and its color is clearly distinguishable from the second background of the marker; for example, the second background is white and the sub-marker is black. A sub-marker can be composed of one or more feature points, and the shape of a feature point is not limited: it can be a dot, a ring, a triangle, or another shape.
In one implementation, as shown in Fig. 2, a marker 210 contains multiple sub-markers 220, and each sub-marker 220 is composed of one or more feature points 221; each white circular pattern in Fig. 2 is a feature point 221. The outline of the marker 210 is rectangular, though the shape of a marker may of course also be another shape, which is not limited here. In Fig. 2, the rectangular white area and the multiple sub-markers within it constitute one marker.
In another implementation, as shown in Fig. 3, a marker 310 contains multiple sub-markers 340, and each sub-marker 340 is composed of one or more feature points 341; here, multiple black dots constitute one sub-marker 340. Specifically, in Fig. 3, each white circular pattern and each black dot pattern is a feature point 341.
Specifically, the visual interactive device includes planar markers and multi-face marker structures. The planar markers include a first marking plate 200 and a second marking plate 500; the multi-face marker structures include a six-face marker structure 400 and a twenty-six-face marker structure 300, and can of course also be marker structures with other numbers of faces, too numerous to list here.
Multiple markers with mutually different contents are arranged on the first marking plate 200, and these markers lie in the same plane: the first marking plate 200 has one marking face, all markers are arranged on this marking face, and all feature points on the first marking plate 200 lie in the marking face. One marker is arranged on the second marking plate 500, and the feature points on the second marking plate 500 likewise all lie in its marking face. There can be multiple second marking plates 500, the marker content of each being different, and multiple second marking plates 500 can be used in combination, for example in application fields such as the augmented reality or virtual reality corresponding to the recognition and tracking system 10.
A multi-face marker structure includes multiple marking faces, and markers are arranged on at least two non-coplanar marking faces. As shown in Fig. 1, the multi-face marker structures include a six-face marker structure 400 and a twenty-six-face marker structure 300. The six-face marker structure 400 includes six marking faces, a marker is arranged on each marking face, and the marker pattern on each face is different.
The twenty-six-face marker structure 300 includes twenty-six faces, of which seventeen are marking faces; a marker is arranged on each marking face, and the marker pattern on each face is different. Of course, the total number of faces, the choice of marking faces, and the marker arrangement of a multi-face marker structure can be set according to actual use and are not limited here.
It should be noted that the visual interactive device is not limited to the planar markers and multi-face marker structures above. The visual interactive device can be any carrier with a marker, and the carrier can be chosen according to the actual scene, for example a model gun such as a toy gun or a game gun: a corresponding marker is arranged on the model gun, and by recognizing and tracking the marker on the model gun, the position and rotation information of the model gun can be obtained. The user performs game operations in the virtual scene by holding the model gun, realizing an augmented reality effect.
The head-mounted display device 100 includes a housing (not labeled), an image acquisition device 110, a processor 140, a display device 120, an optical assembly 130, and a lighting device 150.
The display device 120 and the image acquisition device 110 are electrically connected to the processor. In some embodiments, the lighting device 150 and the image acquisition device 110 are mounted in the housing behind a filter (not labeled) that filters out interfering light such as ambient light; if the lighting device 150 emits infrared light, the filter can be an element that filters out all light other than infrared.
The image acquisition device 110 is used to acquire an image of the object to be shot and send it to the processor; specifically, it acquires an image containing at least one of the marking plates or multi-face marker structures above and sends it to the processor. In one implementation, the image acquisition device 110 is a monocular near-infrared imaging camera. In the current embodiment, the image acquisition device 110 uses infrared reception and is a monocular camera: it is not only low-cost and free of the extrinsic calibration needed between binocular cameras, but also low in power consumption and achieves a higher frame rate at the same bandwidth.
Processor 140 is used to export corresponding display content to display device 120 according to image, is also used to visual interactive The operation of device progress recognition and tracking.
The processor 140 may include a general-purpose or special-purpose microprocessor of any appropriate type, a digital signal processor, or a microcontroller. The processor 140 can be configured to receive data and/or signals from the various components of the system, for example via a network, and can also process the data and/or signals to determine one or more operating conditions in the system. For example, when the processor 140 is applied to a head-mounted display device, the processor generates image data of the virtual world from pre-stored image data and sends it to the display device for display through the optical assembly; it can also receive image data sent by an intelligent terminal or a computer through a wired or wireless network, generate an image of the virtual world from the received image data, and display it through the optical assembly; it can also perform the recognition and tracking operation on the image acquired by the image acquisition device, determine the corresponding display content in the virtual world, and send it to the display device for display through the optical assembly. It should be understood that the processor 140 is not limited to being installed in the head-mounted display device.
In some embodiments, the head-mounted display device 100 further includes a visual odometry camera 160 arranged on the housing, electrically connected to the processor and used to acquire scene images of the external real scene and send them to the processor. When the user wears the head-mounted display device 100, the processor obtains the position and rotation relationship between the user's head and the real scene from the scene images acquired by the visual odometry camera 160, using visual odometry: from the image sequence obtained by the camera, the system derives the change of position and orientation through feature extraction, feature matching and tracking, and motion estimation, completing navigation and localization, and thereby obtains the relative position and rotation relationship between the head-mounted display device and the real scene. Combined with the position and rotation information of the visual interactive device relative to the head-mounted display device, the relative position and rotation relationship between the visual interactive device and the real scene can then be deduced, enabling more complex interactive forms and experiences.
The display device 120 is used to display the display content. In some embodiments, the display device can be part of an intelligent terminal, i.e. the display screen of an intelligent terminal such as a mobile phone or tablet computer. In other embodiments, the display device can also be an independent display (for example, LED, OLED, or LCD), in which case it is fixedly mounted on the housing.
It should be noted that when the display device 120 is the display screen of an intelligent terminal, the housing is provided with a mounting structure for installing the intelligent terminal; in use, the intelligent terminal is mounted on the housing via the mounting structure. The processor 140 can then be the processor in the intelligent terminal, or a processor independently arranged in the housing and electrically connected to the intelligent terminal through a data cable or communication interface. In addition, when the display device 120 is a display device separate from terminal devices such as intelligent terminals, it is fixedly mounted on the housing.
The optical assembly 130 is used to direct the light emitted from the light-emitting surface of the display device 120 toward a predetermined position, where the predetermined position is the observation position of the user's eyes.
The lighting device 150 is used to provide light for the image acquisition device 110 when acquiring the image of the object to be shot. Specifically, the illumination angle and the number of lighting devices 150 can be set according to actual use so that the emitted light covers the object to be shot. The lighting device 150 uses infrared lighting elements that emit infrared light, in which case the image acquisition device is a near-infrared camera that receives infrared light. This active illumination improves the quality of the target image acquired by the image acquisition device 110. The number of lighting devices 150 is not limited: it can be one or several. In some embodiments, the lighting devices 150 are arranged near the image acquisition device 110, for example multiple lighting devices 150 arranged circumferentially around the camera of the image acquisition device 110.
When the user wears the head-mounted display device 100 and enters a preset virtual scene, and the visual interactive device is within the field of view of the image acquisition device 110, the image acquisition device 110 acquires a target image containing the visual interactive device. The processor 140 obtains the target image and its related information, recognizes the visual interactive device, and obtains the position and rotation relationship between the markers in the target image and the image acquisition device, and thereby the position and rotation relationship of the visual interactive device relative to the head-mounted display device, so that the virtual scene the user watches appears at the corresponding position and rotation angle. The user can further generate new virtual images in the virtual scene by combining multiple visual interactive devices, for a better experience, and can interact with the virtual scene through the visual interactive device. In addition, the recognition and tracking system can obtain the position and rotation relationship between the head-mounted display device and the real scene through the visual odometry camera, and thereby the position and rotation relationship between the visual interactive device and the real scene; when the virtual scene has a certain correspondence with the real scene, a virtual scene similar to the real scene can be constructed, improving the realism of the augmented reality experience.
For the above recognition and tracking system, applicable in virtual reality systems and augmented reality systems, an embodiment of the invention provides a localization method for tracking and localizing the visual interactive device when the image acquisition device acquires an image in which the captured feature points of the visual interactive device lie in the same plane. Specifically, referring to Fig. 4, a localization method is shown. The method is applied to the recognition and tracking system 10 shown in Fig. 1, with the processor as the executing subject, and comprises steps S401 to S403.
S401: Obtain the target image, acquired by the image acquisition device, containing the visual interactive device, the target image including multiple coplanar target feature points corresponding to the visual interactive device.
All feature points in the target image are coplanar, i.e. they all lie in approximately the same plane. Specifically, the target image can be an image, acquired by the image acquisition device, containing the marking face of one of the planar markers above; when the acquired image contains a multi-face marker structure of the visual interactive device, the target image can also be an image in which only one marking face of the multi-face marker structure is captured. The target image is the image, acquired by the image acquisition device, containing the visual interactive device, and it contains the information of multiple feature points. The feature points in the target image can be all the feature points of the visual interactive device, or a subset of them.
Further, the images of a certain number of feature points can be chosen arbitrarily from all feature points in the target image as target feature points, and used to determine the true position and rotation information between the image acquisition device (equivalently, the head-mounted display device) and the planar marker or multi-face marker structure carrying the target feature points.
S402: Obtain the pixel coordinates of the target feature points in the target image in the image coordinate system corresponding to the target image.
The pixel coordinate of a target feature point refers to the position of that feature point in the target image, and the pixel coordinates of each target feature point can be obtained directly from the image shot by the image acquisition device. For example, as shown in Fig. 5, taking the first marking plate as an example, I1 is the target image with image coordinate system uov, where the direction of u can be the row direction of the pixel matrix in the target image, the direction of v can be the column direction of the pixel matrix, and the origin o of the image coordinate system can be chosen as a corner point of the target image, for example the top-left or bottom-left corner. The pixel coordinate of each feature point in the image coordinate system can then be determined; for example, the pixel coordinate of feature point 221a in Fig. 5 is (ua, va).
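The patent leaves the feature-point detection step itself implicit. As a minimal sketch of what S402 produces, the following snippet computes a pixel coordinate (u, v) in the image coordinate system described above; the bright-dot-on-dark-background assumption and the intensity-weighted centroid are illustrative choices, not prescribed by the patent:

```python
import numpy as np

# Synthetic 8x8 grayscale target image with one bright patch standing in
# for a marker feature point (values are illustrative only).
img = np.zeros((8, 8))
img[2:5, 3:6] = 255.0

# Pixel coordinate (u, v): intensity-weighted centroid, with u measured
# along the horizontal direction, v along the vertical direction, and the
# origin o at the top-left corner of the image.
vs, us = np.nonzero(img)
w = img[vs, us]
u = float(np.average(us, weights=w))
v = float(np.average(vs, weights=w))
print((u, v))  # → (4.0, 3.0)
```

In practice the detector would also have to associate each detected centroid with the marker layout, so that pixel coordinates and physical coordinates can be paired point by point.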
S403: Obtain, according to the pixel coordinates of the target feature points in the target image and the corresponding physical coordinates of the target feature points obtained in advance, the position and rotation information between the image acquisition device and the visual interactive device.
The physical coordinates are the coordinates, obtained in advance, of the target feature points in the physical coordinate system corresponding to the visual interactive device; the physical coordinate of a target feature point is its true position on the corresponding visual interactive device. The physical coordinates of each feature point can be obtained in advance. Specifically, multiple feature points and multiple markers are arranged on the marking face of the visual interactive device; some point on the marking face is selected as the origin to establish the physical coordinate system, with the marking face as the XOY plane of the physical coordinate system, so that the origin of the XOY coordinate system lies in the marking face.
In one implementation, as shown in Fig. 6, taking a rectangular marking plate as an example, one corner point of the marking face is taken as the origin O, the length direction of the marking face as the X axis, the width direction as the Y axis, and the direction perpendicular to the marking face as the Z axis, establishing the physical coordinate system. The distance of each feature point to the X axis and Y axis can be measured, so the physical coordinate of each feature point in the physical coordinate system can be determined; for example, the physical coordinate of feature point 221a in Fig. 6 is (Xa, Ya, Za), where Za equals 0.
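Such physical coordinates can be tabulated once, offline. The sketch below builds that table for a hypothetical 4 x 3 dot grid with a 20 mm pitch (neither value comes from the patent), with Z = 0 for every point because all feature points lie in the marking face:

```python
import numpy as np

spacing_mm = 20.0  # assumed dot pitch on the marking face
grid_x, grid_y = np.meshgrid(np.arange(4), np.arange(3))

# One row per feature point: (X, Y, Z) with X along the length of the
# marking face, Y along its width, Z perpendicular to it (hence 0).
physical = np.column_stack([
    grid_x.ravel() * spacing_mm,
    grid_y.ravel() * spacing_mm,
    np.zeros(grid_x.size),
])
print(physical.shape)  # → (12, 3)
```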
After the pixel coordinates and physical coordinates of all target feature points in the target image are obtained, the position and rotation information between the image acquisition device and the marker is obtained from the pixel coordinates and physical coordinates of all target feature points in each marker. Specifically, the mapping parameters between the image coordinate system and the physical coordinate system are obtained according to the pixel coordinates of each target feature point, the physical coordinates, and the pre-acquired intrinsic parameters of the image acquisition device.
Specifically, the relationship between the image coordinate system and the physical coordinate system is:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad (1)$$

where (u, v) is the pixel coordinate of a feature point in the image coordinate system of the target image and (X, Y, Z) is the coordinate of the feature point in the physical coordinate system; since Z is set to 0, the physical coordinate in the physical coordinate system is (X, Y, 0).

$\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$ is the camera matrix, i.e., the intrinsic parameter matrix, where (c_x, c_y) is the principal point of the image and (f_x, f_y) are the focal lengths expressed in pixel units; this matrix can be obtained by calibrating the image acquisition device and is a known quantity.

$[\,r_1\ \ r_2\ \ r_3\ \ t\,]$ is the extrinsic parameter matrix, whose first three columns are the rotation parameters and whose fourth column is the translation parameter. Since Z = 0, the column $r_3$ drops out; defining $H = A\,[\,r_1\ \ r_2\ \ t\,]$ as the homography matrix, formula (1) becomes:

$$s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} \quad (2)$$

Therefore, by substituting the pixel coordinates and physical coordinates of the acquired target feature points and the intrinsic parameters of the image acquisition device into formula (2), H, i.e., the mapping parameters between the image coordinate system and the physical coordinate system, can be obtained.
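As an illustrative numerical sketch (synthetic data; not necessarily the exact solver used in the patent), H can be recovered from four or more pixel/physical correspondences by the standard direct linear transform (DLT):

```python
import numpy as np

def estimate_homography(obj_xy, img_uv):
    """Estimate H with s*[u,v,1]^T = H*[X,Y,1]^T from >= 4 correspondences
    via the direct linear transform (DLT)."""
    A = []
    for (X, Y), (u, v) in zip(obj_xy, img_uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # H (up to scale) is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix the scale so H[2, 2] == 1

# Synthetic check: project four planar points with a known H, then recover it.
H_true = np.array([[1.2, 0.1, 30.0], [0.05, 0.9, 40.0], [1e-4, 2e-4, 1.0]])
obj = [(0.0, 0.0), (100.0, 0.0), (100.0, 60.0), (0.0, 60.0)]
img = []
for X, Y in obj:
    p = H_true @ np.array([X, Y, 1.0])
    img.append((p[0] / p[2], p[1] / p[2]))
H_est = estimate_homography(obj, img)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

With exactly four non-degenerate points the null space of A is one-dimensional, so H is determined up to the scale fixed in the last step.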
The rotation parameters and translation parameters between the camera coordinate system of the image acquisition device and the physical coordinate system are then obtained from the mapping parameters, specifically according to an SVD algorithm:
The above homography matrix H is factored by singular value decomposition into
$$H = U \Lambda V^T \quad (3)$$
which yields two orthogonal matrices U and V and a diagonal matrix Λ, where the diagonal matrix Λ contains the singular values of the homography matrix H. This diagonal matrix can therefore itself be treated as a homography matrix, and formula (3) can be rewritten as
$$\Lambda = U^T H V \quad (4)$$
Once the matrix H has been reduced to the diagonal matrix, the rotation matrix R and translation matrix T can be computed. Specifically, $t_\Lambda$ can be eliminated from the three vector equations obtained by splitting formula (4); since $R_\Lambda$ is an orthogonal matrix, each parameter of the normal vector n can then be solved linearly from a new system of equations that relates the parameters of the normal vector n to the singular values of the homography matrix H.
Through the above decomposition algorithm, 8 different solutions of the three unknowns $\{R_\Lambda, t_\Lambda, n_\Lambda\}$ can be obtained. Then, assuming the decomposition of the matrix Λ has been completed, the final decomposition elements are obtained simply from the expressions
$$R = s\,U R_\Lambda V^T,\qquad t = U t_\Lambda,\qquad n = V n_\Lambda,\qquad s = \det(U)\det(V) \quad (5)$$
R and T can thus be solved, where R is the rotation parameter between the camera coordinate system of the image acquisition device and the physical coordinate system, and T is the translation parameter between the camera coordinate system and the physical coordinate system.
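For a Z = 0 model plane there is also a widely used closed-form alternative to the SVD-based decomposition described above: with the intrinsic matrix K known, $K^{-1}H$ is proportional to $[\,r_1\ r_2\ t\,]$, from which R and T follow directly. A sketch with synthetic data (values hypothetical):

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation R and translation t from a plane-induced homography,
    given the camera intrinsic matrix K, using the standard closed form for
    a Z = 0 model plane: K^-1 H = lambda * [r1 r2 t]."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])   # scale fixed by ||r1|| = 1
    r1 = lam * M[:, 0]
    r2 = lam * M[:, 1]
    r3 = np.cross(r1, r2)                 # complete the right-handed frame
    t = lam * M[:, 2]
    R = np.column_stack([r1, r2, r3])
    # Re-orthogonalise R via SVD so it is a proper rotation matrix.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t

# Synthetic check: build H = K [r1 r2 t] from a known pose and recover it.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 50.0])
H = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R_est, t_est = pose_from_homography(H, K)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

Unlike the SVD decomposition, this form does not enumerate multiple candidate solutions; it assumes the sign of H is chosen so that the plane lies in front of the camera.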
Then, the rotation parameters and translation parameters are taken as the position and rotation information between the image acquisition device and the marker plate. The rotation parameters indicate the rotation state between the camera coordinate system and the physical coordinate system, i.e., the rotational degrees of freedom of the image acquisition device about each coordinate axis of the physical coordinate system. The translation parameters indicate the translation state between the camera coordinate system and the physical coordinate system, i.e., the translational degrees of freedom of the image acquisition device along each coordinate axis of the physical coordinate system. The rotation parameters and translation parameters together constitute the six-degree-of-freedom information of the image acquisition device in the physical coordinate system: they can represent the rotation and movement state of the image acquisition device in the physical coordinate system, and the angle and distance between the field of view of the image acquisition device and each coordinate axis of the physical coordinate system can also be obtained from them.
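The six degrees of freedom can be read out of R and T; a sketch that extracts three rotation angles (assuming a ZYX Euler convention, which the text does not specify) and three translations:

```python
import numpy as np

def six_dof(R, T):
    """Split a pose (R, T) into six degrees of freedom: three rotation
    angles about the physical coordinate axes (ZYX Euler convention, an
    assumed choice) and three translations along them."""
    pitch = -np.arcsin(np.clip(R[2, 0], -1.0, 1.0))  # rotation about Y
    roll = np.arctan2(R[2, 1], R[2, 2])              # rotation about X
    yaw = np.arctan2(R[1, 0], R[0, 0])               # rotation about Z
    return (roll, pitch, yaw, *T)

# Example: rotate 0.5 rad about the Z axis and translate along X.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
dof = six_dof(R, np.array([1.0, 0.0, 0.0]))
print(dof)
```

The gimbal-lock case (|R[2,0]| = 1) would need special handling in a production implementation; the clip above only guards against numerical overshoot.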
Referring to Fig. 7, a localization method is shown. The method is applied to the recognition and tracking system 10 shown in Fig. 1, with the processor as the executing subject, and comprises steps S701 to S706.
S701: obtaining the target image containing the visual interactive device acquired by the image acquisition device.
S702: judging whether a marker containing target feature points exists in the target image.
Since the feature points are distributed on the markers, whether feature points exist in the acquired target image can be judged by detecting whether a marker exists in the target image.
One way of judging whether a marker exists in the target image is to match the image of the marker in the target image against the pre-stored images of all markers on the visual interactive device. If a similar or identical marker can be matched, it is determined that a marker exists in the target image, and the next step is entered. If no similar or identical marker can be matched, it is determined that no marker exists in the target image, and the flow returns to S701, i.e., the target image is re-acquired, until it is determined that a marker exists in the target image.
The marker in the target image can be determined by searching the target image for regions whose inner contour is consistent with the contour of the marker. Taking a rectangular marker as an example, all regions in the target image whose contours are rectangular are found as candidate markers, and each candidate marker is then matched against the pre-stored images of all markers on the visual interactive device. If a similar or identical marker can be matched, it is determined that a marker exists in the target image; otherwise, it is determined that no marker exists in the target image.
S703: judging whether the number of target feature points is greater than or equal to a preset value.
The target feature points can be any feature points in the target image. Since subsequent steps obtain the six-degree-of-freedom information of the image acquisition device in the physical coordinate system from the pixel coordinates and physical coordinates of the target feature points, and a certain number of target feature points are needed to set up the systems of equations during solving, the number of target feature points in the target image must be greater than or equal to the preset value. The preset value is a numerical value set by the user; in this embodiment of the present invention, the preset value is 4. The target feature points, whose number is greater than or equal to the preset value, may be distributed on one marker or on multiple markers, as long as the number of feature points in the target image is greater than or equal to the preset value.
S704: obtaining the pixel coordinate of each target feature point in the image coordinate system corresponding to the target image.
For specific implementations, reference can be made to the foregoing embodiments, which are not repeated here. In some embodiments, if the image captured by the image acquisition device cannot meet the usage standard, i.e., there is distortion, the target image needs to be de-distorted.
Specifically, de-distortion processing is performed on the target image to remove the distorted points in the target image; the de-distorted target image is then taken as the obtained target image, and the pixel coordinate of each target feature point in the image coordinate system corresponding to that target image is obtained.
Image distortion refers to deformations such as extrusion, stretching, offset, and twisting of the geometric positions of image pixels generated during imaging relative to a reference system (actual ground position or topographic map), which change the geometric position, size, shape, and orientation of the image. Common distortions include radial distortion, decentering distortion, and thin prism distortion. The target image is de-distorted according to the distortion parameters and distortion model of the image acquisition device.
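As an illustration of inverting the radial part of such a distortion model (coefficients hypothetical, in normalized image coordinates), a distorted point can be de-distorted by fixed-point iteration:

```python
def undistort_point(xd, yd, k1, k2, iterations=20):
    """Invert the radial distortion model xd = xu * (1 + k1*r^2 + k2*r^4)
    (normalised coordinates) by fixed-point iteration."""
    xu, yu = xd, yd                       # initial guess: no distortion
    for _ in range(iterations):
        r2 = xu * xu + yu * yu
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / factor, yd / factor
    return xu, yu

# Round-trip check with hypothetical coefficients: distort, then undistort.
k1, k2 = -0.2, 0.05
x_true, y_true = 0.3, -0.1
r2 = x_true**2 + y_true**2
f = 1.0 + k1 * r2 + k2 * r2 * r2
x_d, y_d = x_true * f, y_true * f
x_u, y_u = undistort_point(x_d, y_d, k1, k2)
print(abs(x_u - x_true) < 1e-9, abs(y_u - y_true) < 1e-9)  # True True
```

For moderate distortion the iteration converges quickly; a full model would also include the decentering (tangential) terms mentioned above.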
S705: obtaining the coordinate of each target feature point in the physical coordinate system corresponding to the visual interactive device.
In this embodiment of the present invention, in order to match the target feature points in the target image with the feature points in the physical coordinate system, a preset marker model is needed. Specifically, before obtaining the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates and physical coordinates of the target feature points, the method further comprises: determining the model feature point corresponding to each target feature point in the preset marker model; looking up the physical coordinates of each model feature point in the preset marker model in the physical coordinate system corresponding to the visual interactive device; and taking the physical coordinates of the model feature point corresponding to each target feature point as the physical coordinates of that target feature point in the physical coordinate system corresponding to the visual interactive device.
The preset marker model can be a virtual visual interactive device established according to the distribution of the feature points on the visual interactive device. The preset marker model contains multiple model feature points, each corresponding to one physical coordinate in the physical coordinate system corresponding to the visual interactive device; for example, it may be a solid structure with one face, with the model feature points distributed on the same face. In addition, the position of each model feature point corresponds to the position of a feature point on the visual interactive device.
After the preset marker model is obtained, the model feature point corresponding to each target feature point in the preset marker model is determined. Specifically, each target feature point is mapped into the coordinate system corresponding to the preset marker model to obtain the coordinate of each target feature point in that coordinate system.
A mapping relationship exists between the pixel coordinate of a target feature point in the target image coordinate system and its coordinate in the coordinate system corresponding to the preset marker model; according to this mapping relationship, the coordinate value of the target feature point in the coordinate system corresponding to the preset marker model can be obtained.
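A sketch of this correspondence step (coordinates hypothetical): after the target feature points have been mapped into the model's coordinate system, each one can be matched to the model feature point at minimum distance, as claim 3 describes:

```python
import numpy as np

# Hypothetical model feature points in the preset marker model's coordinate
# system, and target feature points already mapped into the same system.
model_points = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
target_points = np.array([[9.6, 0.4], [0.2, 10.3]])

# For each target point, pick the model feature point at minimum distance.
dists = np.linalg.norm(target_points[:, None, :] - model_points[None, :, :],
                       axis=2)
matches = dists.argmin(axis=1)
print(matches)  # indices of the matched model feature points: [1 3]
```

The matched indices are then used to look up the corresponding physical coordinates of the model feature points.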
S706: obtaining the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates and physical coordinates of all target feature points.
It should be noted that for the parts described in detail in the above steps, reference can be made to the foregoing embodiments, which are not repeated here.
Referring to Fig. 8, a positioning device 800 provided by an embodiment of the present invention is shown. The device is applied to the processor of the recognition and tracking system 10 shown in Fig. 1. Specifically, the positioning device 800 comprises: a first acquiring unit 801, a second acquiring unit 802, and a processing unit 803.
The first acquiring unit 801 is configured to obtain the target image of the visual interactive device acquired by the image acquisition device, the target image including multiple coplanar target feature points.
The second acquiring unit 802 is configured to obtain the pixel coordinate of each target feature point in the image coordinate system corresponding to the target image.
The processing unit 803 is configured to obtain the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and the pre-acquired physical coordinates corresponding to the target feature points, where the physical coordinates are the pre-acquired coordinates of the target feature points in the physical coordinate system corresponding to the visual interactive device.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices and units described above can refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
Referring to Fig. 9, a positioning device 900 provided by an embodiment of the present invention is shown. The device is applied to the processor of the recognition and tracking system 10 shown in Fig. 1. Specifically, the positioning device 900 comprises: a first acquiring unit 901, a judging unit 902, a second acquiring unit 903, a physical coordinates acquiring unit 904, and a processing unit 905.
The first acquiring unit 901 is configured to obtain the target image of the visual interactive device acquired by the image acquisition device, the target image including multiple coplanar target feature points.
The judging unit 902 is configured to judge whether a marker exists in the target image.
The second acquiring unit 903 is configured to obtain the pixel coordinate of each target feature point in the image coordinate system corresponding to the target image.
Specifically, the second acquiring unit 903 is configured to judge whether the number of the multiple target feature points is greater than a preset value, and if so, to obtain the pixel coordinate of each target feature point in the image coordinate system corresponding to the target image.
The physical coordinates acquiring unit 904 is configured to: determine the model feature point corresponding to each target feature point in the preset marker model; look up the physical coordinates of each model feature point in the preset marker model in the physical coordinate system corresponding to the visual interactive device; and take the physical coordinates of the model feature point corresponding to each target feature point as the physical coordinates of that target feature point in the physical coordinate system corresponding to the visual interactive device.
The processing unit 905 is configured to obtain the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and the pre-acquired physical coordinates corresponding to the target feature points, where the physical coordinates are the pre-acquired coordinates of the target feature points in the physical coordinate system corresponding to the visual interactive device.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices and units described above can refer to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In conclusion localization method provided in an embodiment of the present invention, device, recognition and tracking system and computer-readable Jie Matter, by determining target image after getting the target image of the visual interactive device of image acquisition device Interior multiple target feature points obtain pixel coordinate of each target feature point in the corresponding image coordinate system of target image, According to the pixel coordinate and physical coordinates of all target feature points, described image acquisition device and the visual interactive are obtained Position and rotation information between device just can determine image collecting device and view by the position and rotation information as a result, Feel that the positional relationship between interactive device, accuracy are higher.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection portion with one or more wirings (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Furthermore, the computer-readable medium could even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention can be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods can be implemented with software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques well known in the art can be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the method of the above embodiments can be completed by instructing the relevant hardware through a program, and the program can be stored in a computer-readable storage medium which, when executed, includes one or a combination of the steps of the method embodiments. In addition, the functional units in the embodiments of the present invention can be integrated in one processing module, or each unit can exist alone physically, or two or more units can be integrated in one module. The integrated module can be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it can also be stored in a computer-readable storage medium.

Claims (10)

1. A localization method, applied to a recognition and tracking system, the system comprising an image acquisition device and a visual interactive device with multiple feature points, characterized in that the method comprises:
obtaining a target image containing the visual interactive device acquired by the image acquisition device, the target image including multiple coplanar target feature points on the corresponding visual interactive device;
obtaining pixel coordinates of the target feature points in the target image in an image coordinate system corresponding to the target image;
obtaining position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and pre-acquired physical coordinates corresponding to the target feature points, wherein the physical coordinates are pre-acquired coordinates of the target feature points in a physical coordinate system corresponding to the visual interactive device.
2. The method according to claim 1, characterized in that before the obtaining of the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates and physical coordinates of the target feature points in the target image, the method further comprises:
determining a model feature point corresponding to each target feature point in a preset marker model;
looking up physical coordinates of the model feature points in the preset marker model in the physical coordinate system corresponding to the visual interactive device;
taking the physical coordinates of the model feature point corresponding to each target feature point as the physical coordinates of that target feature point in the physical coordinate system corresponding to the visual interactive device.
3. The method according to claim 2, characterized in that the determining of the model feature point corresponding to each target feature point in the pre-acquired preset marker model comprises:
mapping the target feature points into a coordinate system corresponding to the preset marker model, to obtain coordinates of the target feature points in the coordinate system corresponding to the preset marker model;
taking, in the coordinate system corresponding to the preset marker model, the model feature point with the smallest distance to the coordinate of a target feature point as the model feature point corresponding to that target feature point.
4. The method according to claim 1, characterized in that the obtaining of the position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and the pre-acquired physical coordinates corresponding to the target feature points comprises:
obtaining mapping parameters between the image coordinate system and the physical coordinate system according to the pixel coordinates of the target feature points, the physical coordinates, and pre-acquired intrinsic parameters of the image acquisition device;
obtaining rotation parameters and translation parameters between a camera coordinate system of the image acquisition device and the physical coordinate system according to the mapping parameters;
obtaining the position and rotation information between the image acquisition device and the visual interactive device according to the rotation parameters and translation parameters.
5. The method according to claim 1, characterized in that the obtaining of the pixel coordinates of the target feature points in the image coordinate system corresponding to the target image comprises:
judging whether the number of the multiple target feature points is greater than a preset value;
if so, obtaining the pixel coordinates of the target feature points in the image coordinate system corresponding to the target image.
6. The method according to claim 1, characterized in that before the obtaining of the pixel coordinates of the target feature points in the image coordinate system corresponding to the target image, the method further comprises:
performing de-distortion processing on the target image to remove distorted points in the target image;
taking the de-distorted target image as the obtained target image.
7. A positioning device, applied to a processor of a recognition and tracking system, the system further comprising an image acquisition device and a visual interactive device with multiple feature points, characterized in that the device comprises:
a first acquiring unit, configured to obtain a target image of the visual interactive device acquired by the image acquisition device, the target image including multiple coplanar target feature points on the corresponding visual interactive device;
a second acquiring unit, configured to obtain pixel coordinates of the target feature points in the target image in an image coordinate system corresponding to the target image;
a processing unit, configured to obtain position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and pre-acquired physical coordinates corresponding to the target feature points, wherein the physical coordinates are pre-acquired coordinates of the target feature points in a physical coordinate system corresponding to the visual interactive device.
8. The device according to claim 7, characterized by further comprising a physical coordinates acquiring unit configured to:
determine a model feature point corresponding to each target feature point in a preset marker model;
look up physical coordinates of the model feature points in the preset marker model in the physical coordinate system corresponding to the visual interactive device;
take the physical coordinates of the model feature point corresponding to each target feature point as the physical coordinates of that target feature point in the physical coordinate system corresponding to the visual interactive device.
9. A recognition and tracking system, characterized by comprising an image acquisition device and a visual interactive device with multiple feature points, the image acquisition device being connected to a processor;
the image acquisition device is configured to acquire a target image of the visual interactive device, the target image including multiple coplanar target feature points on the corresponding visual interactive device;
the processor is configured to:
obtain the target image of the visual interactive device acquired by the image acquisition device;
obtain pixel coordinates of the target feature points in the target image in an image coordinate system corresponding to the target image;
obtain position and rotation information between the image acquisition device and the visual interactive device according to the pixel coordinates of the target feature points in the target image and pre-acquired physical coordinates corresponding to the target feature points, wherein the physical coordinates are pre-acquired coordinates of the target feature points in a physical coordinate system corresponding to the visual interactive device.
10. A computer-readable medium having program code executable by a processor, characterized in that the program code causes the processor to execute the method of any one of claims 1-6.
CN201810119776.3A 2018-02-06 2018-02-06 Localization method, device, recognition and tracking system and computer-readable medium Pending CN110120099A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201810119776.3A CN110120099A (en) 2018-02-06 2018-02-06 Localization method, device, recognition and tracking system and computer-readable medium
PCT/CN2019/073578 WO2019154169A1 (en) 2018-02-06 2019-01-29 Method for tracking interactive apparatus, and storage medium and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810119776.3A CN110120099A (en) 2018-02-06 2018-02-06 Localization method, device, recognition and tracking system and computer-readable medium

Publications (1)

Publication Number Publication Date
CN110120099A true CN110120099A (en) 2019-08-13

Family

ID=67520036

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810119776.3A Pending CN110120099A (en) 2018-02-06 2018-02-06 Localization method, device, recognition and tracking system and computer-readable medium

Country Status (1)

Country Link
CN (1) CN110120099A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428468A (en) * 2019-08-12 2019-11-08 北京字节跳动网络技术有限公司 A kind of the position coordinates generation system and method for wearable display equipment
CN110598605A (en) * 2019-09-02 2019-12-20 广东虚拟现实科技有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN110659587A (en) * 2019-09-02 2020-01-07 广东虚拟现实科技有限公司 Marker, marker identification method, marker identification device, terminal device and storage medium
CN110782492A (en) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 Pose tracking method and device
CN110956642A (en) * 2019-12-03 2020-04-03 深圳市未来感知科技有限公司 Multi-target tracking identification method, terminal and readable storage medium
CN111145259A (en) * 2019-11-28 2020-05-12 上海联影智能医疗科技有限公司 System and method for automatic calibration
CN111178127A (en) * 2019-11-20 2020-05-19 青岛小鸟看看科技有限公司 Method, apparatus, device and storage medium for displaying image of target object
CN112330747A (en) * 2020-09-25 2021-02-05 中国人民解放军军事科学院国防科技创新研究院 Multi-sensor combined detection and display method based on unmanned aerial vehicle platform
CN112991461A (en) * 2021-03-11 2021-06-18 珠海格力智能装备有限公司 Material assembling method and device, computer readable storage medium and processor
US11610330B2 (en) 2019-10-08 2023-03-21 Samsung Electronics Co., Ltd. Method and apparatus with pose tracking

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102646275A (en) * 2012-02-22 2012-08-22 西安华旅电子科技有限公司 Method for realizing virtual three-dimensional superposition through tracking and positioning algorithms
CN103377374A (en) * 2012-04-23 2013-10-30 索尼公司 Image processing apparatus, image processing method, and program
CN106296676A (en) * 2016-08-04 2017-01-04 合肥景昇信息科技有限公司 The object positioning method that view-based access control model is mutual
CN106780609A (en) * 2016-11-28 2017-05-31 中国电子科技集团公司第三研究所 Vision positioning method and vision positioning device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
刘经伟 (Liu Jingwei), China Master's Theses Full-Text Database (Electronic Journal), Information Science and Technology Series, China Academic Journal (CD Edition) Electronic Publishing House *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110428468A (en) * 2019-08-12 2019-11-08 北京字节跳动网络技术有限公司 Position coordinate generation system and method for a wearable display device
CN110598605A (en) * 2019-09-02 2019-12-20 广东虚拟现实科技有限公司 Positioning method, positioning device, terminal equipment and storage medium
CN110659587A (en) * 2019-09-02 2020-01-07 广东虚拟现实科技有限公司 Marker, marker identification method, marker identification device, terminal device and storage medium
CN110659587B (en) * 2019-09-02 2022-08-12 广东虚拟现实科技有限公司 Marker, marker identification method, marker identification device, terminal device and storage medium
CN110598605B (en) * 2019-09-02 2022-11-22 广东虚拟现实科技有限公司 Positioning method, positioning device, terminal equipment and storage medium
US11610330B2 (en) 2019-10-08 2023-03-21 Samsung Electronics Co., Ltd. Method and apparatus with pose tracking
CN110782492A (en) * 2019-10-08 2020-02-11 三星(中国)半导体有限公司 Pose tracking method and device
CN110782492B (en) * 2019-10-08 2023-03-28 三星(中国)半导体有限公司 Pose tracking method and device
CN111178127A (en) * 2019-11-20 2020-05-19 青岛小鸟看看科技有限公司 Method, apparatus, device and storage medium for displaying image of target object
CN111178127B (en) * 2019-11-20 2024-02-20 青岛小鸟看看科技有限公司 Method, device, equipment and storage medium for displaying image of target object
CN111145259A (en) * 2019-11-28 2020-05-12 上海联影智能医疗科技有限公司 System and method for automatic calibration
CN111145259B (en) * 2019-11-28 2024-03-08 上海联影智能医疗科技有限公司 System and method for automatic calibration
CN110956642A (en) * 2019-12-03 2020-04-03 深圳市未来感知科技有限公司 Multi-target tracking identification method, terminal and readable storage medium
CN112330747B (en) * 2020-09-25 2022-11-11 中国人民解放军军事科学院国防科技创新研究院 Multi-sensor combined detection and display method based on unmanned aerial vehicle platform
CN112330747A (en) * 2020-09-25 2021-02-05 中国人民解放军军事科学院国防科技创新研究院 Multi-sensor combined detection and display method based on unmanned aerial vehicle platform
CN112991461A (en) * 2021-03-11 2021-06-18 珠海格力智能装备有限公司 Material assembling method and device, computer readable storage medium and processor

Similar Documents

Publication Publication Date Title
CN110120099A (en) Localization method, device, recognition and tracking system and computer-readable medium
CN110119190A (en) Localization method, device, recognition and tracking system and computer-readable medium
CN108765498B (en) Monocular vision tracking method, device and storage medium
CN108171673B (en) Image processing method and device, vehicle-mounted head-up display system and vehicle
CN106570904B (en) Multi-target relative pose recognition method based on Xtion camera
CN110443898A (en) AR intelligent terminal target identification system and method based on deep learning
CN113808160B (en) Sight direction tracking method and device
CN110119194A (en) Virtual scene processing method and device, interactive system, head-mounted display device, visual interaction device and computer-readable medium
CN103824298B (en) Intelligent agent 3D visual positioning device and method based on dual cameras
CN106767810A (en) Indoor positioning method and system based on WiFi and visual information from a mobile terminal
CN108537214B (en) Automatic construction method of indoor semantic map
CN108989794B (en) Virtual image information measuring method and system based on head-up display system
CN110782492B (en) Pose tracking method and device
CN108986129B (en) Calibration plate detection method
CN106952219B (en) Image generation method for fisheye camera correction based on extrinsic parameters
CN207780718U (en) Visual interaction device
CN110443853A (en) Binocular-camera-based calibration method, device, terminal device and storage medium
CN112365549B (en) Attitude correction method and device for vehicle-mounted camera, storage medium and electronic device
CN106886976B (en) Image generation method for fisheye camera correction based on intrinsic parameters
CN108022265A (en) Infrared camera pose determination method, device and system
CN111243003A (en) Vehicle-mounted binocular camera and method and device for detecting road height limiting rod
CN109753945A (en) Target subject recognition method, device, storage medium and electronic device
CN110120100A (en) Image processing method, device and recognition and tracking system
CN111596594B (en) Panoramic big data application monitoring and control system
CN112051920B (en) Gaze point determination method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190813)