CN107767424A - Calibration method of a multi-camera system, multi-camera system, and terminal device - Google Patents
Calibration method of a multi-camera system, multi-camera system, and terminal device
- Publication number
- CN107767424A CN107767424A CN201711046768.2A CN201711046768A CN107767424A CN 107767424 A CN107767424 A CN 107767424A CN 201711046768 A CN201711046768 A CN 201711046768A CN 107767424 A CN107767424 A CN 107767424A
- Authority
- CN
- China
- Prior art keywords
- camera
- multi-camera system
- target object
- image sequence
- preset requirement
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The present application is applicable to the technical field of optical motion capture, and provides a calibration method of a multi-camera system, a multi-camera system, and a terminal device. The method includes: obtaining a motion path, and controlling a robot to move within the capture area of the multi-camera system according to the motion path; gathering, by the multi-camera system, an image sequence of a target object mounted on the robot; and generating the position and direction of each camera in the multi-camera system according to the image sequence of the target object gathered by the multi-camera system. With this method, the position and shooting direction of each camera in the multi-camera system can be calibrated efficiently and accurately.
Description
Technical field
The present application belongs to the technical field of optical motion capture, and in particular relates to a calibration method of a multi-camera system, a multi-camera system, and a terminal device.
Background
A multi-camera system is a system built from multiple cameras, light sources, storage devices, and the like, based on the principles of computer vision, and is commonly applied to 3D reconstruction, motion capture, multi-view video, and so on. Optical motion capture, for example, is a technique grounded in computer vision theory in which multiple high-speed cameras monitor and track target feature points from different angles.
However, to achieve motion capture of target feature points or a target object, the exact position of each camera participating in the shooting must be obtained. At present, the cameras in a multi-camera system are generally calibrated manually, which has low precision, low efficiency, and limited applicable space.
Summary
In view of this, embodiments of the present application provide a calibration method of a multi-camera system, a multi-camera system, and a terminal device, so as to solve the problems of low calibration precision, low efficiency, and limited applicable space of current multi-camera systems.
A first aspect of the embodiments of the present application provides a calibration method of a multi-camera system, including:
obtaining a motion path, and controlling a robot to move within the capture area of the multi-camera system according to the motion path;
gathering, by the multi-camera system, an image sequence of a target object mounted on the robot; and
generating the position and direction of each camera in the multi-camera system according to the image sequence of the target object gathered by the multi-camera system.
A second aspect of the embodiments of the present application provides a terminal device, including:
an acquisition module, configured to obtain a motion path and control a robot to move within the capture area of the multi-camera system according to the motion path;
multiple cameras, configured to gather an image sequence of a target object mounted on the robot; and
a calibration information generating module, configured to generate the position and direction of each of the multiple cameras according to the image sequence of the target object gathered by the multiple cameras.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium storing a computer program, where the computer program, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
A fifth aspect of the embodiments of the present application provides a computer program product, including a computer program, where the computer program, when executed by one or more processors, implements the steps of the method provided by the first aspect of the embodiments of the present application.
In the embodiments of the present application, a motion path is obtained, and a robot is controlled to move within the capture area of the multi-camera system according to the motion path; an image sequence of a target object mounted on the robot is gathered by the multi-camera system; and the position and direction of each camera in the multi-camera system are generated according to the image sequence of the target object gathered by the multi-camera system, so that the position and shooting direction of each camera in the multi-camera system can be obtained efficiently and accurately.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application more clearly, the accompanying drawings required for the embodiments or the prior-art description are briefly introduced below. Apparently, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings may also be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a calibration method of a multi-camera system provided by an embodiment of the present application;
Fig. 2-a shows an arrangement of reflective balls on the target object provided by an embodiment of the present application;
Fig. 2-b shows an arrangement of reflective balls on the target object provided by another embodiment of the present application;
Fig. 3-a is a top view of a motion path of the target object provided by an embodiment of the present application;
Fig. 3-b is a top view of a motion path of the target object provided by another embodiment of the present application;
Fig. 4 is a schematic flowchart of a calibration method of a multi-camera system provided by another embodiment of the present application;
Fig. 5 is a schematic block diagram of a multi-camera system provided by an embodiment of the present application;
Fig. 6 is a schematic block diagram of a terminal device provided by an embodiment of the present application.
Detailed description
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be clear to those skilled in the art that the present application may also be practised in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, integers, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or collections thereof.
It should also be understood that the terminology used in this specification is for the purpose of describing particular embodiments only and is not intended to limit the present application. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in this specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes these combinations.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
To illustrate the technical solutions described herein, specific embodiments are described below.
Before the specific embodiments are introduced, how a multi-camera system performs optical motion capture of a target object is described first. First, multiple cameras are fixed at different positions around the movement region of the target object, the movement region being a three-dimensional region of space. Then, image sequences of the target object within the movement region are gathered by the cameras at the different positions. Finally, by combining the positions and lens directions (shooting directions) of the multiple cameras with the image sequences of the target object gathered by the multiple cameras at the same moment, the position information of the target object within the movement region can be obtained. Since the position and shooting direction of each camera are needed when computing the position information of the target object, accurately obtaining the positions and shooting directions of the multiple cameras is an important problem in optical motion capture. At present, the conventional way to obtain the positions and shooting directions of the multiple cameras is to calibrate the cameras in the movement region manually before collection of the motion data of the target object begins. However, manual calibration suffers from problems such as low efficiency and limited applicable space.
Fig. 1 is a schematic flowchart of a calibration method of a multi-camera system provided by an embodiment of the present application. As shown in the figure, the method may include the following steps:
Step S101: obtain a motion path, and control a robot to move within the capture area of the multi-camera system according to the motion path.
In the embodiments of the present application, the target object may be arranged as at least three reflective balls forming at least one straight line, with the distance between any two reflective balls being different, as shown in Fig. 2. The target object is mounted on a robot, and the robot carries the target object as it moves within the movement region; the movement region of the target object is referred to as the capture area of the multi-camera system.
Taking the mounting of the target object on the robot as an example: if the target object includes three reflective balls, the three balls may be arranged in a straight line, that is, with their centre points collinear. Suppose the three balls arranged in order are ball 1, ball 2, and ball 3; then the distance between ball 1 and ball 2 must not equal the distance between ball 2 and ball 3. Of course, in practical applications the target object may also include more reflective balls. Taking 5 balls as an example, the 5 balls may be arranged to form two straight lines: balls 1, 2, and 3 arranged in one line as described above, with balls 4, 5, and 2 forming the other line. Among balls 1 to 5, the distance between any two adjacent balls differs. The 5 balls may be combined in any arrangement satisfying the above description; for example, they may form a T-shaped target object, as shown in Fig. 2-a, or a cross, as shown in Fig. 2-b. Of course, the balls may also form other shapes, which are not enumerated here one by one.
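The distinct-distance constraint described above can be sanity-checked numerically. The following is a minimal sketch, not part of the patent: the ball coordinates and the choice of adjacent pairs are illustrative assumptions for a hypothetical T-shaped 5-ball target.

```python
# Sketch (assumed layout, not from the patent): verify that all adjacent
# ball-to-ball distances in a T-shaped 5-ball target are pairwise distinct,
# so each ball can be told apart from distances alone.
import math

# Hypothetical T-shape: balls 1-2-3 on one line, balls 4-2-5 on the other.
balls = {
    1: (0.0, 0.0), 2: (0.10, 0.0), 3: (0.25, 0.0),   # first line
    4: (0.10, 0.07), 5: (0.10, -0.12),               # second line through ball 2
}
adjacent = [(1, 2), (2, 3), (4, 2), (2, 5)]
d = [round(math.dist(balls[a], balls[b]), 6) for a, b in adjacent]
assert len(set(d)) == len(d)  # all adjacent distances differ
```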
To make the target object appear at as many different positions in the movement region as possible while the robot moves, a translation component may also be provided on the robot body. The translation component can translate relative to the robot body (for example, horizontal translation, vertical translation, or omnidirectional translation); if the robot body is provided with a translation component, the target object may be mounted on the translation component. Of course, a rotation component may also be provided on the robot body, which can rotate relative to the robot body; if a rotation component is also provided on the robot body, the target object needs to be mounted on the rotation component. Through the translation component and the rotation component, the target object can translate and rotate relative to the robot body. As the robot body moves within the movement region, the translation component and the rotation component enable the target object to traverse the whole movement region (that is, the whole capture area of the multi-camera system).
Here, the obtained motion path refers to the motion path information of the robot within the movement region, which in practice also includes the motion of the translation component and the rotation component. The motion path of the robot body within the movement region, combined with the motions of the translation component and the rotation component, jointly expresses the motion path of the target object. It can also be understood that what is obtained is the motion path of the target object, and that the motion of the robot body within the movement region together with the motions of the translation and rotation components realises the traversal of the capture area of the multi-camera system by the target object according to that motion path. Thus, the robot moving within the capture area of the multi-camera system according to the motion path comprises three aspects: the motion of the robot body within the movement region, the motion of the translation component, and the motion of the rotation component. In practical applications, the motion path of the target object may be set directly, and the robot body, translation component, and rotation component controlled so that the target object moves according to that path; alternatively, the motion paths of the robot body, the translation component, and the rotation component may be set, and the robot directly controlled to move according to them. The setting of the motion path follows one principle: traverse the capture area of the multi-camera system as completely as possible.
It can be understood that, to ensure the target object traverses the capture area as completely as possible, the motion path should be set so that the target object moves in multiple different planes within the capture area. Specifically, the preset motion path may be a curved path, for example a circular ring-shaped path, as shown in Fig. 3-a, where the black dots 31a represent cameras and 32a represents the motion path. It may also take the shape of a sine or cosine wave, as shown in Fig. 3-b, where the black dots 31b represent cameras and 32b represents the motion path. The above is for illustration only and is not intended to limit the present application.
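A sinusoidal sweep such as the one in Fig. 3-b can be generated as a list of waypoints. The sketch below is an assumption-laden illustration: the area dimensions, number of periods, and waypoint count are invented parameters, not values from the patent.

```python
# Sketch: generate (x, y) waypoints for a sine-wave sweep across a
# rectangular capture area, in the spirit of Fig. 3-b. All dimensions
# are illustrative assumptions.
import math

def sine_sweep(width, depth, periods=3, n=60):
    """Return n (x, y) waypoints sweeping the area along a sine curve."""
    pts = []
    for i in range(n):
        t = i / (n - 1)                                  # progress in [0, 1]
        x = t * width                                    # advance across area
        y = depth / 2 * (1 + math.sin(2 * math.pi * periods * t))
        pts.append((x, y))
    return pts

path = sine_sweep(width=10.0, depth=6.0)
assert len(path) == 60
assert all(0.0 <= y <= 6.0 for _, y in path)  # stays inside the area
```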
Step S102: gather, by the multi-camera system, an image sequence of the target object mounted on the robot.
In the embodiments of the present application, while the robot moves within the capture area of the multi-camera system according to the motion path, the image sequence of the target object mounted on the robot can be gathered by the multiple cameras of the multi-camera system. When gathering image sequences, the multiple cameras of the multi-camera system must expose synchronously, so that at any given moment at least two cameras capture the target object. It can be understood that the models of the multiple cameras in the multi-camera system may be identical or different.
Step S103: generate the position and direction of each camera in the multi-camera system according to the image sequence of the target object gathered by the multi-camera system.
In the embodiments of the present application, the process of generating the position and direction of each camera from the gathered image sequences of the target object can be roughly divided into two steps: first, finding match points from the image sequences; second, inferring the positions and directions of the cameras from the match points found. For example, if the target object includes three reflective balls, then in the first step, when finding match points from the image sequences, the two-dimensional coordinates of the balls in the multiple images gathered by the multiple cameras at the same moment can be used to determine which two-dimensional coordinates belong to the same ball. Once the match points are determined, the relative positions and directions of the cameras can be deduced from the camera serial numbers and the two-dimensional coordinates of the determined match points.
In the embodiments of the present application, a motion path is obtained and the robot is controlled to move within the capture area of the multi-camera system according to the motion path; the image sequence of the target object mounted on the robot is gathered by the multi-camera system; and the position and direction of each camera in the multi-camera system are generated according to the image sequences of the target object synchronously gathered by the multi-camera system. By having the robot body, translation component, and rotation component carry the target object through the calibration motion, the target object can be made to traverse the full extent of the capture area as far as possible, so the position and shooting direction of each camera in the multi-camera system can be obtained efficiently and accurately. Moreover, calibrating the multi-camera system with a robot places no limit on the spatial size of the capture area and therefore has good applicability.
Fig. 4 is a schematic flowchart of a calibration method of a multi-camera system provided by another embodiment of the present application. As shown in the figure, the method may include the following steps:
Step S401: gather environmental information of the capture area through sensors mounted on the robot, and obtain three-dimensional spatial information of the capture area based on the environmental information.
In the embodiments of the present application, the motion path may be generated after the robot moves within the movement region and gathers environmental information of that region. While the robot is generating the motion path, the target object need not be mounted on the robot, but sensors must be mounted on the robot in order to collect the environmental information. The sensor may be an RGBD camera (which in fact acquires two images: an ordinary RGB three-channel colour image and a depth image), or an infrared depth sensor; the environmental data collected are two-dimensional images and depth information. The robot can gather environmental information continuously while moving in the movement region. From the multiple acquired two-dimensional images and the depth information corresponding to each image, point cloud data of the movement-region imagery can be obtained through the principles of multi-view geometry; the point cloud data are then filtered, stitched, and registered to obtain a three-dimensional point cloud map of the movement region.
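The first stage of building such a point cloud, back-projecting depth pixels into 3-D camera-frame points with a pinhole model, can be sketched as follows. The intrinsics (fx, fy, cx, cy) and the tiny depth map are illustrative assumptions, not values from the patent.

```python
# Sketch: back-project a depth image to 3-D points with a pinhole camera
# model. Intrinsics and depth values are assumed for illustration.
def depth_to_points(depth, fx, fy, cx, cy):
    """depth: dict {(u, v): z_metres} -> list of (x, y, z) camera-frame points."""
    pts = []
    for (u, v), z in depth.items():
        if z <= 0:                    # skip invalid depth readings
            continue
        x = (u - cx) * z / fx         # pinhole back-projection
        y = (v - cy) * z / fy
        pts.append((x, y, z))
    return pts

cloud = depth_to_points({(320, 240): 2.0, (0, 0): 0.0},
                        fx=500.0, fy=500.0, cx=320.0, cy=240.0)
assert cloud == [(0.0, 0.0, 2.0)]  # principal-point pixel maps to the axis
```

In a full pipeline, clouds from successive poses would then be registered and merged, as the step above describes.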
Step S402: generate a two-dimensional map of the capture area according to the three-dimensional spatial information of the capture area.
In the embodiments of the present application, after the three-dimensional point cloud map of the capture area of the multi-camera system is obtained, a two-dimensional map of the robot's movement region, i.e. the capture area of the multi-camera system, can be obtained by orthographic projection. The motion of the robot body in three-dimensional space is planned according to this two-dimensional map.
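The orthographic projection from point cloud to two-dimensional map can be sketched as collapsing each 3-D point onto a ground-plane occupancy grid. The cell size is an assumed parameter.

```python
# Sketch: collapse a 3-D point cloud to a 2-D occupancy map by orthographic
# projection onto the ground plane (drop z, bucket x/y into grid cells).
def project_to_grid(points, cell=0.5):
    """points: iterable of (x, y, z) -> set of occupied (i, j) ground cells."""
    return {(int(x // cell), int(y // cell)) for x, y, _z in points}

occupied = project_to_grid([(0.1, 0.2, 1.5), (0.3, 0.4, 0.2), (2.6, 0.1, 1.0)])
assert occupied == {(0, 0), (5, 0)}  # two nearby points share one cell
```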
Step S403: obtain a preset motion path generation rule, and generate the motion path according to the two-dimensional map and the preset motion path generation rule.
In the embodiments of the present application, the reference factors of the preset motion path generation rule include: the shape, size, and movement speed of the robot, the size of the movement region, and the positions of obstacles within the movement region. A user may set the motion path generation rule according to these reference factors; for example, it may be preset that the robot begins avoiding an obstacle when within a certain distance of it while moving in the movement region, or the movement speed of the robot may be preset. The motion path includes the motion paths of the robot body, the translation component, and the rotation component. The motion path of the robot body may be its path within the movement region (e.g. direction, speed, distance); the motion path of the translation component may be its path in the horizontal and vertical directions (for example, the translation component moves 0.5 m horizontally relative to the robot body at a speed of 1 m/s, then 0.7 m vertically at 0.5 m/s); and the motion path of the rotation component may be its rotation angle, rotation speed, and so on.
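The reference factors listed above can be pictured as a simple parameter record feeding a clearance check during path generation. This is a sketch under invented values; the field names and the clearance formula are assumptions, not part of the patent.

```python
# Sketch: path-generation rule as a parameter record, plus a helper that
# rejects waypoints violating the preset obstacle clearance. All values
# and field names are illustrative assumptions.
import math

rule = {
    "robot_size_m": 0.6,          # assumed robot footprint diameter
    "speed_mps": 1.0,             # preset movement speed
    "obstacle_clearance_m": 1.5,  # start avoiding within this distance
}

def waypoint_ok(p, obstacles, rule):
    """True if waypoint p keeps clearance + half the footprint from every obstacle."""
    margin = rule["obstacle_clearance_m"] + rule["robot_size_m"] / 2
    return all(math.dist(p, o) > margin for o in obstacles)

assert waypoint_ok((0.0, 0.0), [(5.0, 5.0)], rule)       # far away: ok
assert not waypoint_ok((4.5, 5.0), [(5.0, 5.0)], rule)   # too close: rejected
```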
Steps S401 to S403 correspond to the process of obtaining the motion path: the robot moves within the movement region gathering environmental information of the region, a two-dimensional map is generated from it, and the motion path is then planned and generated from the two-dimensional map. In practical applications, the motion path may be preset by a user, or it may be generated after the robot moves within the movement region and gathers the environmental information of that region.
Step S404: control the robot to move within the capture area of the multi-camera system according to the motion path.
This step is identical to step S102; reference may be made to the description of step S102, which is not repeated here.
Step S405: generate the position and direction of each camera in the multi-camera system according to the image sequences of the target object synchronously gathered by the multi-camera system.
After the image sequence of the target object mounted on the robot has been gathered, the position and direction of each camera in the multi-camera system can be generated from the image sequences of the target object gathered by the multi-camera system, following the description in the previous embodiment, thereby achieving the purpose of camera calibration for the multi-camera system.
However, in practical applications, whether the motion path is set by the user or generated by the robot from scanned environmental information according to the preset motion path rule, the problem that the target object cannot fully traverse the capture area of the multi-camera system is unavoidable. To further improve calibration precision and reduce the probability of unsatisfactory calibration results, after the robot is controlled to finish moving along the preset path, and before the position and direction of each camera in the multi-camera system are generated from the image sequences of the target object gathered by the multi-camera system, i.e. between step S404 and step S405, the method may further include:
Step S406: judge whether the image sequence gathered by each camera in the multi-camera system meets a preset requirement.
In the embodiments of the present application, the precondition for accurately calibrating the positions and directions of the cameras in a multi-camera system is that the image sequence gathered by each camera satisfies certain conditions. Therefore, after the robot finishes moving along the preset path, and before the position and direction of each camera in the multi-camera system are generated from the image sequences of the target object gathered by the multi-camera system, it may further be judged whether the image sequence gathered by each camera in the multi-camera system meets the preset requirement. The purpose of adding this judgment mechanism is to improve the precision of the camera calibration of the multi-camera system.
This step may be performed in various ways, which may include, for example:
Mode one: judge whether a camera meets the preset requirement by a frame-count criterion. The specific implementation is as follows: obtain the number of image frames containing the complete target object gathered by each camera; judge whether the number of frames containing the complete target object gathered by each camera in the multi-camera system is greater than a preset frame count; if the judgment result is yes, determine that the preset requirement is met; if the judgment result is no, determine that the preset requirement is not met. If the target object consists of 3 reflective balls, a frame containing the complete target object refers to a frame containing all 3 balls.
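Mode one reduces to counting qualifying frames per camera. The sketch below illustrates it; the threshold of 100 frames is an assumed value, since the patent leaves the preset frame count unspecified.

```python
# Sketch of "mode one": a camera passes when the number of frames containing
# the complete target (all 3 balls detected) exceeds a preset frame count.
# The threshold value is an illustrative assumption.
def meets_frame_count(frames, min_frames=100, n_balls=3):
    """frames: list of per-frame detected-ball counts for one camera."""
    complete = sum(1 for n in frames if n >= n_balls)
    return complete > min_frames

assert meets_frame_count([3] * 150)                 # 150 complete frames: pass
assert not meets_frame_count([3] * 50 + [2] * 200)  # only 50 complete: fail
```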
Mode two: judge whether a camera meets the preset requirement by a coverage-rate criterion. The specific implementation is as follows: obtain the number of image frames containing the complete target object gathered by each camera, and integrate the acquired frames containing the complete target object; judge whether the coverage rate of the integrated target object in each image is greater than a preset standard; if the judgment result is yes, determine that the preset requirement is met; if the judgment result is no, determine that the preset requirement is not met. If the target object consists of 3 reflective balls, a frame containing the complete target object refers to a frame containing all 3 balls.
The specific implementation process of this mode is as follows: first obtain the frames containing the complete target object gathered by each camera, then integrate the acquired images containing the complete target object by frame number. Since the image size is fixed, the coverage rate of the integrated target object on each camera's imaging plane can be computed (i.e. the occupation rate of the union of the reflective-ball image coordinates on each camera's imaging plane). If the computed coverage rate is greater than the preset coverage-rate standard, the camera is considered to meet the preset requirement; if the coverage rate of the integrated target object on at least one camera's imaging plane is less than or equal to the preset coverage rate, the camera is considered not to meet the preset requirement. Specifically, when integrating the frames containing the complete target object (all reflective balls), the two-dimensional coordinates of the balls can be counted to obtain the total number of times the balls appear in the images. On the premise that this total meets a set threshold, the image is further divided into multiple grid regions (to ensure image precision, each grid region is made no smaller than one pixel when dividing the grid); the number of ball appearances within each grid region is then counted separately, and the average number of appearances is computed. The appearance count obtained for each grid region is then compared with the computed average: if the difference between them is within a preset error range, the camera is considered to meet the preset requirement; if the difference is not within the preset error range, the camera is considered not to meet the preset requirement.
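The per-grid uniformity comparison described above can be sketched as follows. The grid size, image size, and tolerance are invented parameters; the patent specifies the mechanism but not the values.

```python
# Sketch of the grid-uniformity check: bucket ball detections into grid
# cells, then require every cell's count to stay near the mean. Grid,
# image size, and tolerance are illustrative assumptions.
def coverage_uniform(detections, grid=(4, 4), img=(640, 480), tol=0.5):
    """detections: list of (u, v) ball coordinates for one camera."""
    counts = [[0] * grid[0] for _ in range(grid[1])]
    for u, v in detections:
        i = min(int(u * grid[0] / img[0]), grid[0] - 1)
        j = min(int(v * grid[1] / img[1]), grid[1] - 1)
        counts[j][i] += 1
    flat = [c for row in counts for c in row]
    mean = sum(flat) / len(flat)
    # each cell's count must lie within tol * mean of the mean
    return all(abs(c - mean) <= tol * mean for c in flat)

# uniform spread: one detection per cell centre -> passes
pts = [(80 + 160 * i, 60 + 120 * j) for i in range(4) for j in range(4)]
assert coverage_uniform(pts)
assert not coverage_uniform([(10, 10)] * 16)  # all in one corner -> fails
```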
Of course, in the embodiments of the present application, the frame-count criterion and the coverage-rate criterion may also be combined to judge whether the image sequence gathered by each camera in the multi-camera system meets the preset requirement.
Mode three: judge whether a camera meets the preset requirement by combining the frame-count criterion and the coverage-rate criterion. The specific implementation is as follows: obtain the number of image frames containing the complete target object gathered by each camera; judge whether the number of frames containing the complete target object gathered by each camera in the multi-camera system is greater than the preset frame count. If the frame count is judged to be less than or equal to the preset frame count, determine that the preset requirement is not met; if the frame count is judged to be greater than the preset frame count, integrate the acquired frames containing the complete target object, and further judge whether the coverage rate of the integrated target object in each image is greater than the preset standard; if the judgment result is yes, determine that the preset requirement is met; if the judgment result is no, determine that the preset requirement is not met.
Step S407: if the image sequence gathered by at least one camera in the multi-camera system does not meet the preset requirement, re-gather the image sequence of the target object through the cameras that do not meet the preset requirement, until every camera that did not meet the preset requirement has gathered an image sequence that meets the preset requirement.
In the embodiments of the present application, if the image sequence gathered by any one or more cameras does not meet the preset requirement, the image sequence of the target object needs to be re-gathered through the cameras that do not meet the preset requirement.
Specifically, when re-gathering the image sequence of the target object through a camera that does not meet the preset requirement, the operation may include: obtaining the preset motion path of the robot corresponding to the supplementary image gathering of the camera that does not meet the preset requirement, and controlling the robot to move according to that preset motion path, the preset motion path being a robot motion path planned according to the capture area of the camera that does not meet the preset requirement; and gathering, by the multi-camera system, the image sequence of the target object mounted on the robot, so that the camera that does not meet the preset requirement gathers an image sequence of the target object that meets the preset requirement.
In the embodiment of the present application, there is provided predetermined movement corresponding to robot during the progress supplemental image collection of each camera
Path, the predetermined movement path shoot dedicated for each camera to the supplement of destination object.Because predetermined movement path is
Planned according to the pickup area of the camera for being unsatisfactory for preset requirement, therefore the corresponding robot set for each camera
Predetermined movement path is concentrated mainly in the pickup area of the camera, can so cause the image sequence of camera supplement shooting
Meet preset requirement.
Step S408: if the image sequences collected by all cameras in the multi-camera system meet the preset requirement, perform step S405.
In embodiments of the present application, if the first-collected image sequences all meet the preset requirement, no camera needs to perform supplementary shooting of the target object, and the positions and directions of the cameras in the multi-camera system can be generated directly from the image sequences of the target object synchronously collected by the system. If some camera's first-collected image sequence does not meet the preset requirement, the target object must be supplementarily shot by that camera. If the supplementary image sequence meets the requirement, the positions and directions of the cameras in the multi-camera system can be generated from the first-collected and supplementary image sequences; if it still does not, supplementary shooting continues until the image sequences shot by all cameras meet the preset requirement, after which the positions and directions of the cameras in the multi-camera system can be generated from the synchronously collected image sequences of the target object.
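The capture–check–supplement loop described above can be sketched roughly as below. All of the names (`capture`, `meets_requirement`, `supplement_path`, `move_along`) are illustrative stand-ins, not an API defined by the application:

```python
def collect_qualifying_sequences(cameras, robot):
    """Capture, check, and supplement until every camera's sequence passes.

    `cameras` maps camera ids to objects exposing capture(),
    meets_requirement(seq), and a per-camera supplement_path; `robot`
    exposes move_along(path). Returns the qualifying image sequence for
    each camera, from which camera positions and directions would then
    be generated.
    """
    # Initial synchronous capture by every camera.
    sequences = {cid: cam.capture() for cid, cam in cameras.items()}
    failing = {cid for cid, cam in cameras.items()
               if not cam.meets_requirement(sequences[cid])}
    while failing:
        for cid in list(failing):
            # Drive the robot along the path planned for this camera's
            # pickup area, then re-capture with that camera.
            robot.move_along(cameras[cid].supplement_path)
            sequences[cid] = cameras[cid].capture()
        failing = {cid for cid in failing
                   if not cameras[cid].meets_requirement(sequences[cid])}
    return sequences
```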
On the basis of the embodiment shown in Fig. 1, this embodiment adds the process of generating the motion path from the robot's movement within the moving region, which offers a high degree of automation and efficiency. It also adds the process of supplementary shooting of the target object by any camera whose collected image sequence does not meet the preset requirement. The method can thus improve the calibration efficiency of the multi-camera system, and the supplementary shooting can also improve the calibration accuracy of the cameras.
It should be understood that the sequence numbers of the steps above do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
Fig. 5 is a schematic block diagram of a multi-camera system provided by an embodiment of the present application. For convenience of description, only the parts related to this embodiment are shown.
The multi-camera system 5 includes:
an acquisition module 51, for obtaining a motion path and controlling a robot to move within the pickup area of the multi-camera system according to the motion path;
a plurality of cameras 52, for collecting an image sequence of a target object mounted on the robot; and
a calibration information generating module 53, for generating the positions and directions of the cameras according to the image sequences of the target object collected by the plurality of cameras.
Optionally, the acquisition module 51 includes:
a three-dimensional information generating unit 511, for collecting environmental information of the pickup area through a sensor mounted on the robot, and obtaining three-dimensional spatial information of the pickup area based on the environmental information;
a two-dimensional map generating unit 512, for generating a two-dimensional map of the pickup area according to its three-dimensional spatial information; and
a motion path information generating unit 513, for obtaining a predefined path generation rule and generating the motion path according to the two-dimensional map and the rule.
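Read as a pipeline, units 511 to 513 amount to: project the 3-D scan down to a 2-D occupancy map, then apply a path generation rule over the free cells. The sketch below uses a simple boustrophedon (lawnmower) sweep as one plausible stand-in for the unspecified predefined path generation rule; the grid size, height threshold, and the rule itself are all assumptions:

```python
def occupancy_map(points_3d, cell=1.0, max_height=0.2, size=4):
    """Project a 3-D scan onto a 2-D grid: a cell is blocked if any
    point higher than `max_height` falls into it (a crude obstacle test)."""
    grid = [[False] * size for _ in range(size)]   # False = free cell
    for x, y, z in points_3d:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size and z > max_height:
            grid[i][j] = True
    return grid

def boustrophedon_path(grid):
    """Sweep the free cells row by row, alternating direction --
    one plausible 'predefined path generation rule'."""
    path = []
    for i, row in enumerate(grid):
        cols = range(len(row)) if i % 2 == 0 else reversed(range(len(row)))
        path.extend((i, j) for j in cols if not row[j])
    return path
```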
Optionally, the multi-camera system 5 further includes:
a judging module 54, for judging whether the image sequence collected by each camera in the multi-camera system meets a preset requirement; and
a supplementary shooting module 55, for re-collecting the image sequence of the target object by each camera that does not meet the preset requirement, if the image sequence collected by at least one camera in the multi-camera system does not meet it.
The calibration information generating module 53 is further configured to generate the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the system, if the image sequences collected by all cameras meet the preset requirement.
Optionally, the judging module 54 is specifically configured to:
obtain the number of image frames containing the complete target object collected by each camera, and judge, for each camera in the multi-camera system, whether that number is greater than a preset frame number; if so, determine that the preset requirement is met, and if not, determine that it is not met;
or, obtain the number of image frames containing the complete target object collected by each camera, aggregate the collected frames containing the complete target object, and judge whether the coverage of the target object in every aggregated image is greater than a preset standard; if so, determine that the preset requirement is met, and if not, determine that it is not met;
or, obtain the number of image frames containing the complete target object collected by each camera, and judge, for each camera in the multi-camera system, whether that number is greater than a preset frame number; if the number of frames is less than or equal to the preset frame number, determine that the preset requirement is not met; if it is greater, aggregate the collected frames containing the complete target object and further judge whether the coverage of the target object in every aggregated image is greater than a preset standard; if so, determine that the preset requirement is met, and if not, determine that it is not met.
Optionally, the supplementary shooting module 55 includes:
a control unit, for obtaining the predefined motion path of the robot for supplementary image collection by a camera that does not meet the preset requirement, and controlling the robot to move along the predefined motion path, the predefined motion path being a robot motion path planned according to the pickup area of the camera that does not meet the preset requirement; and
a supplementary shooting unit, for collecting, by the multi-camera system, the image sequence of the target object mounted on the robot.
Optionally, the robot is provided with a translation component and a rotation component, and the target object is mounted on the rotation component.
Optionally, the target object includes at least three reflective balls whose arrangement forms at least one straight line, and the distances between any two of the reflective balls are all different.
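The constraint that all pairwise distances between the reflective balls differ is what makes each ball identifiable from its distances to the others. A small 2-D validity check for such a target design might look like the following; restricting to 2-D, requiring all balls to be collinear, and the tolerance value are simplifying assumptions of the sketch:

```python
from itertools import combinations
import math

def is_valid_target(balls, tol=1e-6):
    """Check the target design: at least three balls, all collinear,
    and all pairwise distances distinct (so each ball is identifiable)."""
    if len(balls) < 3:
        return False
    # All pairwise distances must differ from one another.
    dists = [math.dist(a, b) for a, b in combinations(balls, 2)]
    for d1, d2 in combinations(dists, 2):
        if abs(d1 - d2) < tol:
            return False
    # Collinearity: the 2-D cross product of direction vectors is ~0.
    (x0, y0), (x1, y1) = balls[0], balls[1]
    for x, y in balls[2:]:
        if abs((x1 - x0) * (y - y0) - (y1 - y0) * (x - x0)) > tol:
            return False
    return True
```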
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional modules is used as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the multi-camera system may be divided into different functional modules to complete all or part of the functions described above. The functional modules in the embodiments may be integrated into one processing module, each module may exist alone physically, or two or more modules may be integrated into one module; the integrated module may be implemented in the form of hardware or in the form of a software functional unit. In addition, the specific names of the functional modules are only intended to distinguish them from one another and do not limit the protection scope of the present application. For the specific working process of the modules in the multi-camera system, reference may be made to the corresponding process in the foregoing method embodiments, which will not be repeated here.
Fig. 6 is a schematic block diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 6, the terminal device 6 of this embodiment includes: one or more processors 60, a memory 61, and a computer program 62 stored in the memory 61 and runnable on the processor 60. When executing the computer program 62, the processor 60 implements the steps of the above embodiments of the calibration method for a multi-camera system, such as steps S101 to S103 shown in Fig. 1; or, when executing the computer program 62, the processor 60 implements the functions of the modules/units of the above multi-camera system embodiments, such as the functions of modules 51 to 53 shown in Fig. 5.
Exemplarily, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, the instruction segments being used to describe the execution process of the computer program 62 in the terminal device 6. For example, the computer program 62 may be divided into an acquisition module, a collection module, and a calibration information generating module.
The acquisition module is used for obtaining a motion path and controlling the robot to move within the pickup area of the multi-camera system according to the motion path;
the collection module is used for collecting, by the multi-camera system, an image sequence of the target object mounted on the robot; and
the calibration information generating module is used for generating the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the system.
For the other modules or units, reference may be made to the description of the embodiment shown in Fig. 5, which will not be repeated here.
The terminal device includes, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that Fig. 6 is only an example of the terminal device 6 and does not constitute a limitation on it; the terminal device may include more or fewer components than shown, combine certain components, or use different components. For example, the terminal device may also include input devices, output devices, network access devices, buses, and the like.
The processor 60 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 61 may be an internal storage unit of the terminal device 6, such as a hard disk or memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used to store the computer program and the other programs and data required by the terminal device, and may also be used to temporarily store data that has been or will be output.
In the above embodiments, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the relevant descriptions of the other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed terminal device and method may be implemented in other ways. For example, the terminal device embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and other divisions are possible in actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the above embodiments of the present application may also be completed by a computer program instructing the relevant hardware. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program includes computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced with equivalents; and such modifications or replacements, which do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, should all fall within the protection scope of the present application.
Claims (10)
- 1. A calibration method for a multi-camera system, characterized by comprising: obtaining a motion path, and controlling a robot to move within the pickup area of the multi-camera system according to the motion path; collecting, by the multi-camera system, an image sequence of a target object mounted on the robot; and generating the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the multi-camera system.
- 2. The calibration method for a multi-camera system according to claim 1, characterized in that obtaining the motion path comprises: collecting environmental information of the pickup area through a sensor mounted on the robot, and obtaining three-dimensional spatial information of the pickup area based on the environmental information; generating a two-dimensional map of the pickup area according to the three-dimensional spatial information; and obtaining a predefined path generation rule, and generating the motion path according to the two-dimensional map and the predefined path generation rule.
- 3. The calibration method for a multi-camera system according to claim 1, characterized in that, after collecting, by the multi-camera system, the image sequence of the target object mounted on the robot, and before generating the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the multi-camera system, the method further comprises: judging whether the image sequence collected by each camera in the multi-camera system meets a preset requirement; if the image sequences collected by all cameras in the multi-camera system meet the preset requirement, generating the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the multi-camera system; and if the image sequence collected by at least one camera in the multi-camera system does not meet the preset requirement, re-collecting the image sequence of the target object by each camera that does not meet the preset requirement, until all such cameras have collected image sequences that meet the preset requirement, and then performing the step of generating the positions and directions of the cameras in the multi-camera system according to the image sequences of the target object collected by the multi-camera system.
- 4. The calibration method for a multi-camera system according to claim 3, characterized in that judging whether the image sequence collected by each camera in the multi-camera system meets the preset requirement comprises: obtaining the number of image frames containing the complete target object collected by each camera, and judging, for each camera in the multi-camera system, whether that number is greater than a preset frame number; if so, determining that the preset requirement is met, and if not, determining that it is not met; or, obtaining the number of image frames containing the complete target object collected by each camera, aggregating the collected frames containing the complete target object, and judging whether the coverage of the target object in every aggregated image is greater than a preset standard; if so, determining that the preset requirement is met, and if not, determining that it is not met; or, obtaining the number of image frames containing the complete target object collected by each camera, and judging, for each camera in the multi-camera system, whether that number is greater than a preset frame number; if the number of frames is less than or equal to the preset frame number, determining that the preset requirement is not met; if it is greater, aggregating the collected frames containing the complete target object and judging whether the coverage of the target object in every aggregated image is greater than a preset standard; if so, determining that the preset requirement is met, and if not, determining that it is not met.
- 5. The calibration method for a multi-camera system according to claim 3, characterized in that re-collecting the image sequence of the target object by a camera that does not meet the preset requirement comprises: obtaining the predefined motion path of the robot for supplementary image collection by the camera that does not meet the preset requirement, and controlling the robot to move along the predefined motion path, the predefined motion path being a robot motion path planned according to the pickup area of the camera that does not meet the preset requirement; and collecting, by the multi-camera system, the image sequence of the target object mounted on the robot, so that the camera that does not meet the preset requirement collects an image sequence of the target object that meets it.
- 6. The calibration method for a multi-camera system according to any one of claims 1 to 5, characterized in that the robot is provided with a translation component and a rotation component, and the target object is mounted on the rotation component.
- 7. The calibration method for a multi-camera system according to claim 6, characterized in that the target object includes at least three reflective balls whose arrangement forms at least one straight line, and the distances between any two of the reflective balls are all different.
- 8. A multi-camera system, characterized by comprising: an acquisition module, for obtaining a motion path and controlling a robot to move within the pickup area of the multi-camera system according to the motion path; a plurality of cameras, for collecting an image sequence of a target object mounted on the robot; and a calibration information generating module, for generating the positions and directions of the cameras in the plurality of cameras according to the image sequences of the target object collected by the plurality of cameras.
- 9. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that, when executing the computer program, the processor implements the steps of the method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, and when the computer program is executed by one or more processors, the steps of the method according to any one of claims 1 to 7 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711046768.2A CN107767424A (en) | 2017-10-31 | 2017-10-31 | Scaling method, multicamera system and the terminal device of multicamera system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711046768.2A CN107767424A (en) | 2017-10-31 | 2017-10-31 | Scaling method, multicamera system and the terminal device of multicamera system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107767424A true CN107767424A (en) | 2018-03-06 |
Family
ID=61271958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711046768.2A Pending CN107767424A (en) | 2017-10-31 | 2017-10-31 | Scaling method, multicamera system and the terminal device of multicamera system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107767424A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101226638A (en) * | 2007-01-18 | 2008-07-23 | 中国科学院自动化研究所 | Method and apparatus for standardization of multiple camera system |
CN103279949A (en) * | 2013-05-09 | 2013-09-04 | 浙江大学 | Operation method of self-positioning robot-based multi-camera parameter automatic calibration system |
CN106363304A (en) * | 2016-08-19 | 2017-02-01 | 武汉华工激光工程有限责任公司 | Multi-camera correcting and positioning method and glass laser cutting device |
CN106598052A (en) * | 2016-12-14 | 2017-04-26 | 南京阿凡达机器人科技有限公司 | Robot security inspection method based on environment map and robot thereof |
CN106780624A (en) * | 2016-12-14 | 2017-05-31 | 广东工业大学 | A kind of polyphaser scaling method and device based on object of reference |
CN106780625A (en) * | 2016-12-19 | 2017-05-31 | 南京天祥智能设备科技有限公司 | Many mesh camera calibration devices |
CN107146254A (en) * | 2017-04-05 | 2017-09-08 | 西安电子科技大学 | The Camera extrinsic number scaling method of multicamera system |
CN107194974A (en) * | 2017-05-23 | 2017-09-22 | 哈尔滨工业大学 | A kind of raising method of many mesh Camera extrinsic stated accuracies based on multiple identification scaling board image |
- 2017-10-31: CN application CN201711046768.2A filed (publication CN107767424A); status: Pending
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11430135B2 (en) | 2018-04-11 | 2022-08-30 | Shenzhen Realis Multimedia Technology Co., Ltd. | Capture-ball-based on-ball point distribution method and motion-posture recognition method, system, and apparatus |
WO2019196192A1 (en) * | 2018-04-11 | 2019-10-17 | 深圳市瑞立视多媒体科技有限公司 | Capture ball-based sphere distribution method, motion attitude identification method and system, and device |
CN111121743A (en) * | 2018-10-30 | 2020-05-08 | 阿里巴巴集团控股有限公司 | Position calibration method and device and electronic equipment |
CN111121743B (en) * | 2018-10-30 | 2023-11-24 | 阿里巴巴集团控股有限公司 | Position calibration method and device and electronic equipment |
US10957074B2 (en) | 2019-01-29 | 2021-03-23 | Microsoft Technology Licensing, Llc | Calibrating cameras using human skeleton |
CN111829531A (en) * | 2019-04-15 | 2020-10-27 | 北京京东尚科信息技术有限公司 | Two-dimensional map construction method and device, robot positioning system and storage medium |
WO2021129791A1 (en) * | 2019-12-27 | 2021-07-01 | Shenzhen Realis Multimedia Technology Co., Ltd. | Multi-camera calibration method in large-space environment based on optical motion capture, and related device |
CN113592954B (en) * | 2019-12-27 | 2023-06-09 | Shenzhen Realis Multimedia Technology Co., Ltd. | Multi-camera calibration method in large-space environment based on optical motion capture, and related equipment |
CN113592954A (en) * | 2019-12-27 | 2021-11-02 | Shenzhen Realis Multimedia Technology Co., Ltd. | Multi-camera calibration method in large-space environment based on optical motion capture, and related equipment |
CN112215899A (en) * | 2020-09-18 | 2021-01-12 | Shenzhen Realis Multimedia Technology Co., Ltd. | Frame data online processing method and device and computer equipment |
CN112215899B (en) * | 2020-09-18 | 2024-01-30 | Shenzhen Realis Multimedia Technology Co., Ltd. | Frame data online processing method and device and computer equipment |
CN112613469A (en) * | 2020-12-30 | 2021-04-06 | Shenzhen Ubtech Technology Co., Ltd. | Target object motion control method and related equipment |
CN112613469B (en) * | 2020-12-30 | 2023-12-19 | Shenzhen Ubtech Technology Co., Ltd. | Target object motion control method and related equipment |
CN113449623B (en) * | 2021-06-21 | 2022-06-28 | Zhejiang Kangxu Technology Co., Ltd. | Lightweight liveness detection method based on deep learning |
CN113449623A (en) * | 2021-06-21 | 2021-09-28 | Zhejiang Kangxu Technology Co., Ltd. | Lightweight liveness detection method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107767424A (en) | Calibration method for multi-camera system, multi-camera system and terminal device | |
CN109816703B (en) | Point cloud registration method based on camera calibration and ICP algorithm | |
CN112132972B (en) | Three-dimensional reconstruction method and system for fusing laser and image data | |
CN105160702B (en) | Dense stereo image matching method and system aided by LiDAR point cloud | |
CN111275750B (en) | Indoor space panoramic image generation method based on multi-sensor fusion | |
JP7018566B2 (en) | Image pickup device, image processing method and program | |
CN105225269A (en) | Motion-based object modeling system | |
WO2015024407A1 (en) | Binocular vision navigation system and method for power robot | |
CN111914715A (en) | Intelligent vehicle target real-time detection and positioning method based on bionic vision | |
CN107636679A (en) | Obstacle detection method and device | |
CN107808402A (en) | Calibration method for multi-camera system, multi-camera system and terminal device | |
CN110501036A (en) | Calibration inspection method and device for sensor parameters | |
CN206611521U (en) | Multi-sensor-based vehicle environment recognition system and omnidirectional vision module | |
CN105867611A (en) | Space positioning method, device and system in virtual reality system | |
CN110889873A (en) | Target positioning method and device, electronic equipment and storage medium | |
CN112097732A (en) | Binocular camera-based three-dimensional distance measurement method, system, equipment and readable storage medium | |
Mordohai et al. | Real-time video-based reconstruction of urban environments | |
CN112837207A (en) | Panoramic depth measuring method, four-eye fisheye camera and binocular fisheye camera | |
CN115035235A (en) | Three-dimensional reconstruction method and device | |
CN113920183A (en) | Monocular vision-based vehicle front obstacle distance measurement method | |
Yuan et al. | Fast localization and tracking using event sensors | |
CN116309813A (en) | Solid-state laser radar-camera tight coupling pose estimation method | |
CN210986289U (en) | Four-eye fisheye camera and binocular fisheye camera | |
CN108090930A (en) | Obstacle vision detection system and method based on binocular stereo camera | |
CN115937810A (en) | Sensor fusion method based on binocular camera guidance |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication |
 | SE01 | Entry into force of request for substantive examination |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20180306