CN105631901A - Method and device for determining movement information of to-be-detected object - Google Patents


Info

Publication number
CN105631901A
CN105631901A (application CN201610096765.9A)
Authority
CN
China
Prior art keywords
point image
image
point
measured
labelling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610096765.9A
Other languages
Chinese (zh)
Inventor
陆真国
王金亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Le Xiang Science And Technology Ltd
Original Assignee
Shanghai Le Xiang Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Le Xiang Science And Technology Ltd filed Critical Shanghai Le Xiang Science And Technology Ltd
Priority to CN201610096765.9A priority Critical patent/CN105631901A/en
Publication of CN105631901A publication Critical patent/CN105631901A/en
Priority to PCT/CN2016/096379 priority patent/WO2017143745A1/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/20 - Analysis of motion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012 - Head tracking input arrangements

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose a method and device for determining the movement information of an object to be measured. The method comprises: obtaining the N-th image frame of the object captured by a camera apparatus; determining, according to the marker-point images in the N-th frame, the correspondence between marker-point images and physical marker points; and determining, according to the position information of the physical marker points and of the marker-point images, the movement information of the object at the moment corresponding to the N-th frame. Because the method derives the movement information from the positions of the marker-point images and of the physical marker points via their correspondence, it can, unlike prior-art approaches that obtain only a rotation attitude from a gyroscope or similar sensor, effectively determine the translation of the object to be measured. It therefore senses the motion state of the object more accurately and quickly, offers better real-time performance, and markedly improves the user's actual experience.

Description

Method and device for determining the movement information of an object to be measured
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a method and device for determining the movement information of an object to be measured.
Background art
A virtual reality helmet is a head-mounted device that uses a head-mounted display to shut out the wearer's external vision and hearing, guiding the user into the sensation of being in a virtual environment. With the development of electronic technology, virtual reality helmets now allow the user to control the virtual scene through various advanced sensing means, according to the user's own viewpoint and position in the virtual environment; specifically, while the user wears the helmet, the motion state of the user's head is sensed so that different scenes can be presented to the user. An important part of the virtual reality helmet experience is immersion; accordingly, whether the motion state of the user's head can be sensed accurately and quickly is a key indicator of helmet performance.
At present, because a common accelerometer cannot accurately measure spatial translation, most virtual reality helmets obtain only a rotation attitude through sensors such as gyroscopes. Used this way, the helmet cannot sense translational head motion (i.e., translational motion of the helmet itself) while the user wears it, so its perception of the helmet's motion state is biased, which significantly degrades the user's actual experience.
In summary, a method that can sense the motion state of a virtual reality helmet quickly and accurately is urgently needed.
Summary of the invention
Embodiments of the present invention provide a method and a device for determining the movement information of an object to be measured, so as to sense the motion state of a virtual reality helmet quickly and accurately.
The method for determining the movement information of an object to be measured provided by an embodiment of the present invention comprises:
obtaining the N-th frame image of the object to be measured captured by a camera apparatus, the N-th frame image comprising a marker-point image of each physical marker point on a first side of the object;
determining, according to the marker-point images in the N-th frame, the correspondence between the marker-point images and the physical marker points;
obtaining the position information, in a preset world coordinate system, of each physical marker point on the first side of the object, and the position information, in a preset image coordinate system, of each marker-point image in the N-th frame;
determining, according to the correspondence between the marker-point images and the physical marker points and the position information of the physical marker points and of the marker-point images, the movement information of the object at the moment corresponding to the N-th frame.
Preferably, determining the correspondence between the marker-point images and the physical marker points according to the marker-point images in the N-th frame comprises:
determining a reference marker-point image in the N-th frame by an envelope (convex hull) method, the reference marker-point image being one of the marker-point images in the N-th frame;
determining, according to the positional relationship between the marker-point images in the N-th frame and the reference marker-point image, the number of each marker-point image in the N-th frame;
taking the physical marker point whose number is identical to that of a marker-point image as the physical marker point corresponding to that image, thereby obtaining the correspondence between the marker-point images and the physical marker points; the physical marker points and the marker-point images are numbered according to the same numbering rule, and the physical marker points on the first side of the object are laid out as a convex polygon array.
Preferably, determining the number of each marker-point image in the N-th frame according to the positional relationship between the marker-point images and the reference marker-point image comprises:
determining, according to the positional relationship between the marker-point images in the N-th frame and the reference marker-point image, the marker-point images of a first layer and their ordering, the reference marker-point image being one of the first-layer marker-point images;
determining, according to the positional relationship between the reference marker-point image and the marker-point images in the N-th frame other than those of the first through (M-1)-th layers, the marker-point images of the M-th layer and their ordering, M being an integer greater than or equal to 2;
determining the numbers of the marker-point images of the first through M-th layers according to their ordering.
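The layer-by-layer numbering just described can be read as convex-hull "onion peeling": the outermost envelope of marker-point images forms the first layer, it is removed, and the envelope of the remainder forms the next layer. The following Python sketch illustrates that reading only; the within-layer starting vertex (the one nearest the image origin) and the counter-clockwise traversal are assumptions made for illustration, since the patent does not disclose its numbering rule here, and both function names are invented.

```python
def convex_hull(points):
    """Andrew's monotone chain; returns strictly convex hull vertices
    in counter-clockwise order (collinear points are dropped)."""
    pts = sorted(points)
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def number_by_layers(points):
    """Peel convex-hull layers ('envelope method') and number the
    marker-point images layer by layer; within a layer, start from the
    vertex nearest the image origin (an assumed tie-break) and proceed
    counter-clockwise. Returns a {point: number} mapping."""
    remaining = [tuple(p) for p in points]
    numbering, n = {}, 0
    while remaining:
        hull = convex_hull(remaining)
        start = min(range(len(hull)),
                    key=lambda i: hull[i][0] ** 2 + hull[i][1] ** 2)
        for i in range(len(hull)):
            numbering[hull[(start + i) % len(hull)]] = n
            n += 1
        remaining = [p for p in remaining if p not in hull]
    return numbering
```

For a 3x3 grid of image points this yields three layers, numbered 0 through 8: the four corners first, then the four edge midpoints, then the centre point.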
Preferably, after the N-th frame image of the object captured by the camera apparatus is obtained, and before the correspondence between the marker-point images and the physical marker points is determined, the method further comprises:
identifying the marker-point images according to at least the pixel value of each pixel of the N-th frame, the number of pixels on each contour, and the number of pixels enclosed within each contour.
Preferably, identifying the marker-point images according to at least the pixel value of each pixel of the N-th frame, the number of pixels on each contour, and the number of pixels enclosed within each contour comprises:
obtaining first candidate marker-point images according to the pixel value of each pixel of the N-th frame, the pixels of a first candidate marker-point image having pixel values greater than or equal to a first threshold;
obtaining second candidate marker-point images according to the number of pixels on the contour of each first candidate, the number of pixels on the contour of a second candidate being greater than or equal to a second threshold and less than or equal to a third threshold;
obtaining third candidate marker-point images according to the number of pixels enclosed within the contour of each second candidate, the number of pixels enclosed within the contour of a third candidate being greater than or equal to a fourth threshold;
determining the ellipse parameters of each third candidate, and taking as marker-point images those third candidates whose ellipse parameters fall within a preset parameter range.
Preferably, determining the movement information of the object at the moment corresponding to the N-th frame according to the correspondence between the marker-point images and the physical marker points and their position information comprises:
determining, with a PnP (Perspective-n-Point) algorithm, the rotation and translation of the camera apparatus relative to the object, according to the correspondence between the marker-point images and the physical marker points and the position information of each;
obtaining, according to the rotation and translation of the camera apparatus relative to the object, the movement information of the object at the moment corresponding to the N-th frame, namely the rotation and translation of the object relative to the camera apparatus at that moment.
Preferably, after the rotation and translation of the camera apparatus relative to the object are determined with the PnP algorithm, and before the movement information of the object at the moment corresponding to the N-th frame is determined, the method further comprises:
optimizing the rotation and translation of the camera apparatus relative to the object with the Levenberg-Marquardt (LM) algorithm.
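The PnP-plus-LM step refines a 6-DoF pose by minimizing reprojection error. To keep a sketch self-contained rather than calling a library PnP solver, the following Python code implements a minimal Levenberg-Marquardt refinement directly, with a forward-difference Jacobian; the pinhole intrinsics, the planar point layout, and all function names are invented for illustration and are not the patent's values.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pts3d, rvec, tvec, fx, fy, cx, cy):
    """Pinhole projection of 3-D points given a pose and intrinsics."""
    cam = pts3d @ rodrigues(rvec).T + tvec
    return np.column_stack((fx * cam[:, 0] / cam[:, 2] + cx,
                            fy * cam[:, 1] / cam[:, 2] + cy))

def refine_pose_lm(pts3d, pts2d, rvec0, tvec0, intr, iters=50):
    """Levenberg-Marquardt refinement of a 6-DoF pose over reprojection error."""
    p = np.concatenate((rvec0, tvec0)).astype(float)
    lam = 1e-3
    def residuals(q):
        return (project(pts3d, q[:3], q[3:], *intr) - pts2d).ravel()
    r = residuals(p)
    for _ in range(iters):
        J = np.empty((r.size, 6))
        eps = 1e-6
        for j in range(6):            # forward-difference Jacobian column
            dq = np.zeros(6); dq[j] = eps
            J[:, j] = (residuals(p + dq) - r) / eps
        step = np.linalg.solve(J.T @ J + lam * np.eye(6), -J.T @ r)
        r_new = residuals(p + step)
        if r_new @ r_new < r @ r:     # accept the step, relax damping
            p, r, lam = p + step, r_new, lam * 0.5
        else:                         # reject the step, increase damping
            lam *= 10.0
    return p[:3], p[3:]
```

In a real system this step is typically a library call; OpenCV's `solvePnP` with its iterative flag, for example, is commonly described as performing this same kind of LM refinement internally.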
Preferably, the physical marker points are infrared points and the marker-point images are infrared-point images;
after the movement information of the object at the moment corresponding to the N-th frame is determined, the method further comprises:
when the movement information of the object at the moment corresponding to the N-th frame is determined to fall within a preset movement range, turning off the infrared points on the first side of the object and turning on the infrared points on a second side, the second side being predicted from the movement information of the object at the moment corresponding to the N-th frame;
obtaining the (N+1)-th frame image captured by the camera apparatus;
judging whether the (N+1)-th frame contains an infrared-point image of each infrared point on the second side of the object; if so, determining the movement information of the object at the moment corresponding to the (N+1)-th frame according to the (N+1)-th frame; if not, turning off the infrared points on the second side, turning on the infrared points on a third side, and obtaining the (N+2)-th frame image captured by the camera apparatus, the third side being obtained according to a preset cyclic order.
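The side-switching logic above amounts to a small state machine: light the predicted side, confirm it in frame N+1, and otherwise fall back to a preset cyclic order. A minimal Python sketch of one step of that loop, with an assumed side ordering (the patent presets a cyclic order but does not disclose it):

```python
# Assumed cyclic order of the helmet's six sides; illustrative only.
SIDES = ["front", "back", "top", "bottom", "left", "right"]

def choose_side(active, predicted, frame_has):
    """One step of the side-switching loop described above.
    active     -- side whose infrared points are currently lit
    predicted  -- side the movement information predicts will face the camera
    frame_has  -- callable side -> bool: True if that side's infrared-point
                  images all appear in the newly captured frame
    Returns the side whose infrared points should be lit next."""
    if predicted != active:
        active = predicted   # turn off the old side, turn on the predicted one
    if frame_has(active):
        return active        # prediction confirmed by frame N+1
    # Prediction failed: fall back to the preset cyclic order for frame N+2.
    return SIDES[(SIDES.index(active) + 1) % len(SIDES)]
```

Running the loop once per frame keeps only one side's infrared points lit at a time, matching the single-side processing described in the detailed description.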
An embodiment of the present invention provides a device for determining the movement information of an object to be measured, comprising:
a first acquisition module, configured to obtain the N-th frame image of the object to be measured captured by a camera apparatus, the N-th frame image comprising a marker-point image of each physical marker point on a first side of the object;
a determination module, configured to determine, according to the marker-point images in the N-th frame, the correspondence between the marker-point images and the physical marker points;
a second acquisition module, configured to obtain the position information, in a preset world coordinate system, of each physical marker point on the first side of the object, and the position information, in a preset image coordinate system, of each marker-point image in the N-th frame;
a processing module, configured to determine, according to the correspondence between the marker-point images and the physical marker points and the position information of each, the movement information of the object at the moment corresponding to the N-th frame.
Preferably, the determination module is specifically configured to:
determine a reference marker-point image in the N-th frame by an envelope (convex hull) method, the reference marker-point image being one of the marker-point images in the N-th frame;
determine, according to the positional relationship between the marker-point images in the N-th frame and the reference marker-point image, the number of each marker-point image in the N-th frame;
take the physical marker point whose number is identical to that of a marker-point image as the physical marker point corresponding to that image, thereby obtaining the correspondence between the marker-point images and the physical marker points; the physical marker points and the marker-point images are numbered according to the same numbering rule, and the physical marker points on the first side of the object are laid out as a convex polygon array.
Preferably, the determination module is specifically configured to:
determine, according to the positional relationship between the marker-point images in the N-th frame and the reference marker-point image, the marker-point images of a first layer and their ordering, the reference marker-point image being one of the first-layer marker-point images;
determine, according to the positional relationship between the reference marker-point image and the marker-point images in the N-th frame other than those of the first through (M-1)-th layers, the marker-point images of the M-th layer and their ordering, M being an integer greater than or equal to 2;
determine the numbers of the marker-point images of the first through M-th layers according to their ordering.
Preferably, the determination module is further configured to:
identify the marker-point images according to at least the pixel value of each pixel of the N-th frame, the number of pixels on each contour, and the number of pixels enclosed within each contour.
Preferably, the determination module is specifically configured to:
obtain first candidate marker-point images according to the pixel value of each pixel of the N-th frame, the pixels of a first candidate marker-point image having pixel values greater than or equal to a first threshold;
obtain second candidate marker-point images according to the number of pixels on the contour of each first candidate, the number of pixels on the contour of a second candidate being greater than or equal to a second threshold and less than or equal to a third threshold;
obtain third candidate marker-point images according to the number of pixels enclosed within the contour of each second candidate, the number of pixels enclosed within the contour of a third candidate being greater than or equal to a fourth threshold;
determine the ellipse parameters of each third candidate, and take as marker-point images those third candidates whose ellipse parameters fall within a preset parameter range.
Preferably, the processing module is specifically configured to:
determine, with a PnP algorithm, the rotation and translation of the camera apparatus relative to the object, according to the correspondence between the marker-point images and the physical marker points and the position information of each;
obtain, according to the rotation and translation of the camera apparatus relative to the object, the movement information of the object at the moment corresponding to the N-th frame, namely the rotation and translation of the object relative to the camera apparatus at that moment.
Preferably, the processing module is further configured to:
optimize the rotation and translation of the camera apparatus relative to the object with the Levenberg-Marquardt (LM) algorithm.
Preferably, the physical marker points are infrared points and the marker-point images are infrared-point images;
the processing module is further configured to:
when the movement information of the object at the moment corresponding to the N-th frame is determined to fall within a preset movement range, turn off the infrared points on the first side of the object and turn on the infrared points on a second side, the second side being predicted from the movement information of the object at the moment corresponding to the N-th frame;
obtain the (N+1)-th frame image captured by the camera apparatus;
judge whether the (N+1)-th frame contains an infrared-point image of each infrared point on the second side of the object; if so, determine the movement information of the object at the moment corresponding to the (N+1)-th frame according to the (N+1)-th frame; if not, turn off the infrared points on the second side, turn on the infrared points on a third side, and obtain the (N+2)-th frame image captured by the camera apparatus, the third side being obtained according to a preset cyclic order.
In the above embodiment of the present invention, the N-th frame image of the object to be measured captured by the camera apparatus is obtained, the N-th frame comprising a marker-point image of each physical marker point on a first side of the object; the correspondence between the marker-point images and the physical marker points is determined according to the marker-point images in the N-th frame; the position information of each physical marker point on the first side in a preset world coordinate system, and of each marker-point image in the N-th frame in a preset image coordinate system, is obtained; and the movement information of the object at the moment corresponding to the N-th frame is determined according to the correspondence and the position information. By determining the correspondence between the marker-point images and the physical marker points and deriving the movement information from their position information, the embodiment of the present invention can, in contrast to prior-art methods that obtain only a rotation attitude from sensors such as gyroscopes, effectively determine the translation of the object to be measured, thereby sensing its motion state more accurately and quickly, with better real-time performance, and significantly improving the user's actual experience.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a system architecture to which an embodiment of the present invention applies;
Fig. 2 is a schematic diagram of the layout of the infrared lamps on each side of a virtual reality helmet;
Fig. 3 is a schematic flowchart of a method for determining the movement information of an object to be measured provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of image pretreatment provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of determining the correspondence between marker-point images and physical marker points provided by an embodiment of the present invention;
Fig. 6 is a schematic diagram of marker-point image numbering provided by an embodiment of the present invention;
Fig. 7 is a schematic diagram of prediction processing according to the movement information of the object to be measured;
Fig. 8 is a schematic structural diagram of a device for determining the movement information of an object to be measured provided by an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The method for determining the movement information of an object to be measured provided in the embodiments of the present invention is applicable to many scenarios; Fig. 1 exemplarily shows a system architecture to which the embodiments apply.
As shown in Fig. 1, the system architecture comprises a server 101, a camera apparatus 102 and an object to be measured 103. The server 101 and the camera apparatus 102 can communicate by wire or wirelessly, i.e., they can exchange information by wired or wireless transmission; for example, the camera apparatus 102 can send captured images to the server 101 by wired or wireless transmission. The server 101 and the object to be measured 103 can likewise communicate by wire or wirelessly; for example, the server 101 can send scene-rendering data to the object 103 by wired or wireless transmission.
In the embodiments of the present invention, the server 101 may be a PC host with data-processing capability.
The object to be measured 103 may be a virtual reality helmet comprising first through sixth sides (assuming normal use, i.e., the helmet is worn on the user's head, the first through sixth sides can be identified relative to the camera apparatus as the front, back, top, bottom, left and right sides respectively). Infrared lamps (also called infrared points) are laid out on each side, and on every side they are laid out according to a preset layout rule. Specifically, to ease subsequent computation, the layout rule in the embodiments may be that the infrared lamps on each side are arranged as a convex polygon array; on the basis of this rule, different sides may also use different layouts, e.g., the left and right sides, or the top and bottom sides, may use the same layout as each other. Fig. 2 is a schematic diagram of the layout of the infrared lamps on each side of the helmet. Note that Fig. 2 is only an exemplary representation of the lamp layout; the physical article may differ in its proportions.
The camera apparatus 102 may be an infrared camera, used mainly to capture the state of the infrared lamps laid out on the helmet and to send the captured images to the server 101, so that the server can determine the movement information of the helmet (the rotation matrix R and the translation vector T) through the related computation.
Specifically, in the embodiments of the present invention, the virtual reality helmet and the infrared camera are each connected to the PC host by a USB data line, USB 2.0 or USB 3.0, preferably USB 3.0. The helmet is also connected to the PC host through HDMI in order to obtain scene-rendering data.
Based on the system architecture shown in Fig. 1, Fig. 3 shows the schematic flow, from the server's perspective, of a method for determining the movement information of an object to be measured provided by an embodiment of the present invention, comprising:
Step 301: obtain the N-th frame image of the object to be measured captured by the camera apparatus, the N-th frame comprising a marker-point image of each physical marker point on a first side of the object;
Step 302: determine, according to the marker-point images in the N-th frame, the correspondence between the marker-point images and the physical marker points;
Step 303: obtain the position information, in a preset world coordinate system, of each physical marker point on the first side of the object, and the position information, in a preset image coordinate system, of each marker-point image in the N-th frame;
Step 304: determine, according to the correspondence between the marker-point images and the physical marker points and the position information of each, the movement information of the object at the moment corresponding to the N-th frame.
In the embodiment of the present invention, by determining the correspondence between the marker-point images and the physical marker points and deriving the movement information of the object from their position information, the translation of the object to be measured can be determined effectively, in contrast to prior-art methods that obtain only a rotation attitude from sensors such as gyroscopes; the motion state of the object is thus sensed more accurately and quickly, real-time performance is better, and the user's actual experience can be significantly improved.
Physical markings point in the embodiment of the present invention can be infrared point, and labelling point image is infrared point image.
In the embodiment of the present invention, in movable information (spatial translation amount) the optical tracking process of virtual implementing helmet, only the side that completely can be shot by photographic head in six sides of helmet up, down, left, right, before and after is processed all the time. Therefore, process to be determined the movable information of virtual implementing helmet by single side is illustrated below.
In the embodiment of the present invention, since the camera may be affected by ambient light and other interference when capturing images, after the N-th frame image of the object to be measured is acquired from the camera, the N-th frame image should be preprocessed: interference is excluded and the marker point images are determined according to at least the pixel value of each pixel of the N-th frame image, the number of pixels on each contour, and the number of pixels enclosed by each contour.
Specifically: first candidate marker point images are obtained according to the pixel values, where the pixel values of the pixels of a first candidate marker point image are greater than or equal to a first threshold; second candidate marker point images are obtained according to the number of pixels on the contour of each first candidate marker point image, where that number is greater than or equal to a second threshold and less than or equal to a third threshold; third candidate marker point images are obtained according to the number of pixels enclosed by the contour of each second candidate marker point image, where that number is greater than or equal to a fourth threshold; finally, the ellipse parameters of each third candidate marker point image are determined, and the third candidate marker point images whose ellipse parameters fall within a preset parameter range are taken as the marker point images. The first to fourth thresholds and the preset parameter range can all be set empirically by those skilled in the art.
Fig. 4 is a schematic flowchart of preprocessing an image, provided by the embodiment of the present invention, comprising steps 401 to 408, which are described below with reference to Fig. 4.
Step 401: obtain the N-th frame image;
Step 402: binarization, yielding the first candidate marker point images. Specifically: determine the maximum pixel value max over all pixels of the N-th frame image, take a*max as the binarization threshold (the first threshold), and traverse every pixel; if the pixel value of a pixel is less than a*max, set it to 0, and if it is greater than or equal to a*max, set it to 255. Here a is a weight whose value can be set empirically by those skilled in the art, for instance 0.9;
Step 403: obtain the number of pixels on the contour of each first candidate marker point image, where a contour is the set of positions at which the pixel value changes from 0 to 255 or from 255 to 0. Specifically, contours can be traced with the 8-neighborhood method (except at the image border, every pixel adjoins 8 neighboring pixels; once the first pixel of a contour is found, its 8 neighbors can be traversed to quickly obtain the second contour pixel, and so on), so that the number of pixels on each contour is determined quickly;
Step 404: for each first candidate marker point image, delete those whose number of contour pixels is less than the second threshold or greater than the third threshold, obtaining the second candidate marker point images;
Step 405: for each second candidate marker point image, delete those whose number of pixels enclosed by the contour is less than the fourth threshold, obtaining the third candidate marker point images;
Step 406: based on the preset image coordinate system (with an x-axis and a y-axis), fit the ellipse parameters (ellipse center, major and minor axes, inclination angle, etc.) of each third candidate marker point image with a fitting algorithm;
Step 407: for each third candidate marker point image, delete those whose ellipse parameters do not fall within the preset parameter range; the third candidate marker point images whose ellipse parameters do fall within the range are taken as the marker point images;
Step 408: output the fitted ellipse parameters of the marker point images, and determine the position information of the marker points from the ellipse parameters.
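The filtering in steps 402 to 405 can be sketched as follows. This is an illustrative pure-Python reading of the flow (the a*max binarization, 8-neighborhood connectivity, and the contour-length and area screens), not the patented implementation; the function names and the default thresholds are invented for the example.

```python
# Illustrative sketch of steps 402-405 (not the patented implementation):
# binarize against a*max, group white pixels into 8-connected blobs,
# then screen blobs by contour length (perimeter) and enclosed area.

def binarize(img, a=0.9):
    # Step 402: threshold at a * (maximum pixel value)
    mx = max(max(row) for row in img)
    return [[255 if p >= a * mx else 0 for p in row] for row in img]

def blobs(binimg):
    # Collect 8-connected components of 255-valued pixels
    h, w = len(binimg), len(binimg[0])
    seen = [[False] * w for _ in range(h)]
    comps = []
    for y in range(h):
        for x in range(w):
            if binimg[y][x] == 255 and not seen[y][x]:
                seen[y][x] = True
                stack, comp = [(y, x)], []
                while stack:
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binimg[ny][nx] == 255 and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                comps.append(comp)
    return comps

def perimeter(comp, binimg):
    # Step 403: contour pixels are blob pixels adjacent to background or border
    h, w = len(binimg), len(binimg[0])
    def boundary(y, x):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or binimg[ny][nx] == 0:
                    return True
        return False
    return sum(1 for (y, x) in comp if boundary(y, x))

def candidate_markers(img, t2=4, t3=200, t4=5):
    # Steps 404-405 with made-up second/third/fourth thresholds t2, t3, t4
    b = binarize(img)
    return [c for c in blobs(b)
            if t2 <= perimeter(c, b) <= t3 and len(c) >= t4]
```

Ellipse fitting (steps 406 to 407) would then run only on the surviving blobs, e.g. with a least-squares conic fit.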
Through the above preprocessing of the N-th frame image, the embodiment of the present invention quickly and accurately excludes environmental interference and determines the marker point images, laying a good foundation for subsequently determining the correspondence between marker point images and marker points.
Since the layout of the infrared lamps differs somewhat from side to side of the virtual reality helmet, the embodiment of the present invention only illustrates the processing of one side (the first side, namely the front); the other sides are handled analogously. In step 302, the correspondence between the marker point images in the N-th frame image and the physical marker points of the first side is determined as follows: a reference marker point image in the N-th frame image is determined by an envelope method, the reference marker point image being one of the marker point images in the N-th frame image; the numbers of the marker point images in the N-th frame image are determined from their positions relative to the reference marker point image; and the physical marker point with the same number as a marker point image is taken as the physical marker point corresponding to that marker point image, yielding the correspondence between marker point images and physical marker points. The numbers of the physical marker points and of the marker point images are obtained under the same numbering rule.
Further, in the embodiment of the present invention, when the numbers of the marker point images in the N-th frame image are determined from their positions relative to the reference marker point image, this is done layer by layer with the envelope method, from the outermost layer to the innermost layer. Specifically: the first-layer marker point images and their ordering are determined from the positions of the marker point images relative to the reference marker point image, the reference marker point image itself belonging to the first layer; the M-th-layer marker point images and their ordering are determined from the positions, relative to the reference marker point image, of the marker point images other than those of the first through (M-1)-th layers, where M is an integer greater than or equal to 2; and the numbers of the marker point images of the first through M-th layers are determined from their orderings.
The specific value of M can be set empirically by those skilled in the art according to the layout of the infrared lamps on the side; typically, M may be set to 3.
Fig. 5 is a schematic flowchart, provided by the embodiment of the present invention, of determining the correspondence between marker point images and physical marker points, comprising steps 501 to 504, which are described below with reference to Fig. 5.
Step 501: determine, based on a convex hull algorithm, the reference marker point image and the marker point images of the first layer (the outermost layer), and sort the first-layer marker point images in ascending order of their distance to the reference marker point image. The reference marker point image is determined as follows: sort the marker point images of the N-th frame image in ascending order of y-coordinate; if several marker point images share the same y-coordinate, sort those in ascending order of x-coordinate; the marker point image with the largest y-coordinate and smallest x-coordinate (i.e. the one in the lower-left corner) is taken as the reference marker point image;
Step 502: determine the four marker point images of the second layer (the next outer layer) and sort them in ascending order of their distance to the reference marker point image;
Step 503: determine the three marker point images of the third layer (the innermost layer) and sort them in ascending order of their distance to the reference marker point image;
Step 504: number the first through third layers obtained above sequentially from the outside inward, so that the first layer is numbered 1-8, the second layer 9-12, and the third layer 13-15, as shown in the marker point image numbering diagram of Fig. 6.
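Assuming the lamps are laid out in nested convex rings (8/4/3 as in Fig. 6), the layer-by-layer numbering of steps 501 to 504 can be sketched by repeatedly peeling the convex hull of the remaining points. The code below is an illustrative pure-Python sketch, not the patented algorithm; the point coordinates in the usage are invented, and ties in distance are broken arbitrarily.

```python
# Illustrative sketch of steps 501-504: pick the lower-left reference point
# (largest y, then smallest x, in image coordinates where y grows downward),
# peel convex-hull layers from outside in, sort each layer by distance to
# the reference, and number the points sequentially.
import math

def convex_hull(pts):
    # Andrew's monotone chain; returns the hull vertices of pts
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def number_points(pts):
    # Reference: largest y-coordinate, then smallest x-coordinate (lower left)
    ref = max(pts, key=lambda p: (p[1], -p[0]))
    remaining, numbered = list(pts), []
    while remaining:
        layer = convex_hull(remaining)               # peel the outermost layer
        layer.sort(key=lambda p: math.dist(p, ref))  # ascending distance to ref
        numbered.extend(layer)
        remaining = [p for p in remaining if p not in layer]
    return {p: i + 1 for i, p in enumerate(numbered)}
```

With three nested rings of 8, 4 and 3 points, `number_points` assigns numbers 1-8 to the outer ring (the reference point itself getting 1), 9-12 to the middle ring and 13-15 to the inner ring, matching the numbering of Fig. 6.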
Since the physical marker points of the first side are numbered under the same numbering rule, if the numbers of the marker point images determined in this way are valid, the physical marker point with the same number as a marker point image can be taken as its corresponding physical marker point, yielding the correspondence between marker point images and physical marker points.
By determining the correspondence between marker point images and physical marker points with a convex hull algorithm, the embodiment of the present invention makes this determination faster and more accurate, laying a good foundation for subsequently determining the movement information of the object to be measured.
In step 304, based on the correspondence between marker point images and physical marker points obtained in step 302, and the position information of each physical marker point and each marker point image obtained in step 303, a PnP algorithm is used to determine the rotation and translation of the camera relative to the object to be measured, and an LM (Levenberg-Marquardt) algorithm is used to optimize that rotation and translation. From the optimized rotation and translation of the camera relative to the object to be measured, the movement information of the object to be measured at the moment corresponding to the N-th frame image is obtained, namely the rotation and translation of the object to be measured relative to the camera at that moment.
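The reprojection-based refinement in step 304 can be illustrated with a stripped-down example. The sketch below refines only the camera translation under a pinhole model, with the rotation fixed to the identity and a fixed damping factor; a full PnP + Levenberg-Marquardt solver would estimate the rotation too and adapt the damping. All intrinsics, point coordinates and function names are invented for the example.

```python
# Simplified LM-style refinement: recover the camera translation that
# reprojects known 3D marker points onto their observed 2D positions.
# Rotation is fixed to identity for brevity; all numbers are illustrative.

def project(pt, t, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    # Pinhole projection of a 3D point translated by t
    x, y, z = pt[0] + t[0], pt[1] + t[1], pt[2] + t[2]
    return (fx * x / z + cx, fy * y / z + cy)

def residuals(t, pts3d, obs):
    r = []
    for p, (u, v) in zip(pts3d, obs):
        pu, pv = project(p, t)
        r.extend([pu - u, pv - v])
    return r

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r_: abs(M[r_][c]))
        M[c], M[piv] = M[piv], M[c]
        for r_ in range(c + 1, 3):
            f = M[r_][c] / M[c][c]
            for k in range(c, 4):
                M[r_][k] -= f * M[c][k]
    x = [0.0, 0.0, 0.0]
    for r_ in range(2, -1, -1):
        x[r_] = (M[r_][3] - sum(M[r_][k] * x[k] for k in range(r_ + 1, 3))) / M[r_][r_]
    return x

def refine_translation(pts3d, obs, t0, iters=50, lam=1e-3, eps=1e-6):
    t = list(t0)
    for _ in range(iters):
        r = residuals(t, pts3d, obs)
        # Numeric Jacobian: 2N residuals x 3 translation parameters
        J = [[0.0] * 3 for _ in r]
        for k in range(3):
            tp = list(t)
            tp[k] += eps
            rp = residuals(tp, pts3d, obs)
            for i in range(len(r)):
                J[i][k] = (rp[i] - r[i]) / eps
        # Damped normal equations: (J^T J + lam * diag(J^T J)) d = -J^T r
        A = [[sum(J[i][a] * J[i][b] for i in range(len(r))) for b in range(3)]
             for a in range(3)]
        for k in range(3):
            A[k][k] *= 1.0 + lam
        g = [sum(J[i][a] * r[i] for i in range(len(r))) for a in range(3)]
        d = solve3(A, [-x for x in g])
        t = [t[k] + d[k] for k in range(3)]
    return t
```

Given synthetic observations generated from a known translation, the refinement recovers that translation from a rough initial guess; in practice a library PnP solver would supply the initial pose.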
Fig. 7 is a schematic diagram of prediction processing based on the movement information of the object to be measured. After determining the rotation and translation of the object to be measured relative to the camera at the moment corresponding to the N-th frame image, the embodiment of the present invention further includes the prediction processing shown in Fig. 7, specifically:
Step 701: determine, using the PnP algorithm and the LM algorithm, the rotation and translation of the object to be measured relative to the camera at the moment corresponding to the N-th frame image;
Step 702: determine whether the movement information of the object to be measured at the moment corresponding to the N-th frame image falls within a preset movement range, namely whether the rotation is within a preset rotation range and the translation within a preset translation range; if so, perform step 703; if not, perform step 701. The preset rotation range and preset translation range can both be set by those skilled in the art empirically or on the basis of extensive experiments;
Step 703: predict the second side from the movement information of the object to be measured at the moment corresponding to the N-th frame image, turn off the infrared points of the first side of the object to be measured, and turn on the infrared points of its second side;
Step 704: obtain the (N+1)-th frame image collected by the camera;
Step 705: judge whether the (N+1)-th frame image includes the infrared point images of all infrared points of the second side of the object to be measured; if so, perform step 701, determining from the (N+1)-th frame image the movement information of the object to be measured at the moment corresponding to the (N+1)-th frame image; if not, perform step 706;
Step 706: turn off the infrared points of the second side of the object to be measured, turn on the infrared points of its third side, and obtain the (N+2)-th frame image collected by the camera, the third side being obtained according to a preset circular order. For example, the preset circular order may be front, left, right, top, bottom, back.
Step 707: judge whether the (N+2)-th frame image includes the infrared point images of all infrared points of the third side of the object to be measured; if so, perform step 701, determining from the (N+2)-th frame image the movement information of the object to be measured at the moment corresponding to the (N+2)-th frame image; if not, perform step 706 again, turning on the infrared points of the next side in the preset circular order.
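The side-switching loop of steps 703 to 707 can be sketched as a small search over the preset circular order. This is a hypothetical illustration: the side names, their order, and the visibility test are stand-ins, not taken from the patent text.

```python
# Hypothetical sketch of steps 703-707: starting from the predicted side,
# follow a preset circular order until a side whose infrared points are all
# visible in the current frame is found.

CYCLE = ["front", "left", "right", "top", "bottom", "back"]  # illustrative order

def next_visible_side(predicted, visible):
    """Return the first side, starting from `predicted` and following CYCLE,
    all of whose infrared points are visible; None if no side qualifies."""
    start = CYCLE.index(predicted)
    for i in range(len(CYCLE)):
        side = CYCLE[(start + i) % len(CYCLE)]
        if side in visible:
            return side
    return None
```

Each miss in the loop corresponds to one more captured frame (the (N+1)-th, (N+2)-th, and so on) in the flow above.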
Corresponding to the above method flow, the embodiment of the present invention further provides a device for determining the movement information of an object to be measured; the specific implementation of the device can refer to the above method.
Fig. 8 is a schematic structural diagram of a device for determining the movement information of an object to be measured, provided by an embodiment of the present invention. The device includes:
a first obtaining module 801, configured to obtain the N-th frame image of the object to be measured collected by the camera, the N-th frame image including the marker point images of the physical marker points of the first side of the object to be measured;
a determining module 802, configured to determine, according to the marker point images in the N-th frame image, the correspondence between the marker point images and the physical marker points;
a second obtaining module 803, configured to obtain the position information of each physical marker point of the first side of the object to be measured in a preset world coordinate system, and the position information of each marker point image in the N-th frame image in a preset image coordinate system;
a processing module 804, configured to determine the movement information of the object to be measured at the moment corresponding to the N-th frame image, according to the correspondence between the marker point images and the physical marker points and the position information of each physical marker point and each marker point image.
Preferably, the determining module 802 is specifically configured to:
determine a reference marker point image in the N-th frame image based on an envelope method, the reference marker point image being one of the marker point images in the N-th frame image;
determine the number of each marker point image in the N-th frame image according to the positions of the marker point images in the N-th frame image relative to the reference marker point image;
take the physical marker point with the same number as a marker point image as the physical marker point corresponding to that marker point image, obtaining the correspondence between the marker point images and the physical marker points; the numbers of the physical marker points and of the marker point images are obtained under the same numbering rule; the physical marker points of the first side of the object to be measured are distributed as a convex polygon array.
Preferably, the determining module 802 is specifically configured to:
determine the first-layer marker point images and their ordering according to the positions of the marker point images in the N-th frame image relative to the reference marker point image, the reference marker point image being a first-layer marker point image;
determine the M-th-layer marker point images and their ordering according to the positions, relative to the reference marker point image, of the marker point images in the N-th frame image other than those of the first through (M-1)-th layers, M being an integer greater than or equal to 2;
determine the numbers of the marker point images of the first through M-th layers according to their orderings.
Preferably, the determining module 802 is further configured to:
determine the marker point images according to at least the pixel value of each pixel of the N-th frame image, the number of pixels on each contour, and the number of pixels enclosed by each contour.
Preferably, the determining module 802 is specifically configured to:
obtain first candidate marker point images according to the pixel value of each pixel of the N-th frame image, the pixel values of the pixels of a first candidate marker point image being greater than or equal to a first threshold;
obtain second candidate marker point images according to the number of pixels on the contour of each first candidate marker point image, that number being greater than or equal to a second threshold and less than or equal to a third threshold;
obtain third candidate marker point images according to the number of pixels enclosed by the contour of each second candidate marker point image, that number being greater than or equal to a fourth threshold;
determine the ellipse parameters of the third candidate marker point images, the third candidate marker point images whose ellipse parameters fall within a preset parameter range being taken as the marker point images.
Preferably, the processing module 804 is specifically configured to:
determine, using a PnP algorithm, the rotation and translation of the camera relative to the object to be measured, according to the correspondence between the marker point images and the physical marker points and the position information of each physical marker point and each marker point image;
obtain the movement information of the object to be measured at the moment corresponding to the N-th frame image according to the rotation and translation of the camera relative to the object to be measured, the movement information being the rotation and translation of the object to be measured relative to the camera at that moment.
Preferably, the processing module 804 is further configured to:
optimize the rotation and translation of the camera relative to the object to be measured using an LM algorithm.
Preferably, the physical marker points are infrared points and the marker point images are infrared point images;
the processing module 804 is further configured to:
when it is determined that the movement information of the object to be measured at the moment corresponding to the N-th frame image falls within a preset movement range, turn off the infrared points of the first side of the object to be measured and turn on the infrared points of its second side, the second side being predicted from the movement information of the object to be measured at the moment corresponding to the N-th frame image;
obtain the (N+1)-th frame image collected by the camera;
judge whether the (N+1)-th frame image includes the infrared point images of all infrared points of the second side of the object to be measured; if so, determine from the (N+1)-th frame image the movement information of the object to be measured at the moment corresponding to the (N+1)-th frame image; if not, turn off the infrared points of the second side, turn on the infrared points of the third side, and obtain the (N+2)-th frame image collected by the camera, the third side being obtained according to a preset circular order.
As can be seen from the above:
In the embodiments of the present invention, the N-th frame image of the object to be measured collected by the camera is obtained, the N-th frame image including the marker point images of the physical marker points of the first side of the object to be measured; the correspondence between the marker point images and the physical marker points is determined according to the marker point images in the N-th frame image; the position information of each physical marker point of the first side in a preset world coordinate system and the position information of each marker point image in the N-th frame image in a preset image coordinate system are obtained; and the movement information of the object to be measured at the moment corresponding to the N-th frame image is determined from that correspondence and position information. By determining the correspondence between marker point images and physical marker points and using the position information of both, the embodiments, compared with prior-art methods that obtain the rotation attitude from sensors such as a gyroscope, can also effectively determine the translation of the object to be measured, and thus perceive its movement state more quickly and accurately, with better real-time performance, which can significantly improve the user's sense of immersion.
Those skilled in the art will appreciate that embodiments of the present invention may be provided as a method or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, apparatus (system) and computer program product according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data-processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data-processing device, causing a series of operational steps to be performed on the computer or other programmable device to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. The appended claims are therefore intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to include them as well.

Claims (16)

1. A method for determining movement information of an object to be measured, characterised by comprising:
obtaining an N-th frame image of the object to be measured collected by a camera, the N-th frame image including marker point images of physical marker points of a first side of the object to be measured;
determining, according to the marker point images in the N-th frame image, a correspondence between the marker point images and the physical marker points;
obtaining position information of each physical marker point of the first side of the object to be measured in a preset world coordinate system and position information of each marker point image in the N-th frame image in a preset image coordinate system;
determining movement information of the object to be measured at a moment corresponding to the N-th frame image, according to the correspondence between the marker point images and the physical marker points and the position information of each physical marker point and each marker point image.
2. The method of claim 1, characterised in that determining, according to the marker point images in the N-th frame image, the correspondence between the marker point images and the physical marker points comprises:
determining a reference marker point image in the N-th frame image based on an envelope method, the reference marker point image being one of the marker point images in the N-th frame image;
determining a number of each marker point image in the N-th frame image according to positions of the marker point images in the N-th frame image relative to the reference marker point image;
taking the physical marker point with the same number as a marker point image as the physical marker point corresponding to that marker point image, to obtain the correspondence between the marker point images and the physical marker points, the numbers of the physical marker points and of the marker point images being obtained under a same numbering rule, and the physical marker points of the first side of the object to be measured being distributed as a convex polygon array.
3. method as claimed in claim 2, it is characterised in that the position relationship according to the labelling point image in described nth frame image Yu described reference marker point image, it is determined that the numbering of each labelling point image in described nth frame image, including:
Position relationship according to the labelling point image in described nth frame image Yu described reference marker point image, it is determined that the sequence of the labelling point image of ground floor and the labelling point image of described ground floor; Described reference marker point image is the labelling point image of described ground floor;
Position relationship according to the labelling point image except the labelling point image of described ground floor to M-1 layer in described nth frame image Yu described reference marker point image, it is determined that the sequence of the labelling point image of M shell and the labelling point image of described M shell; M is the integer be more than or equal to 2;
The sequence of the labelling point image according to described ground floor to M shell, it is determined that described ground floor is to the numbering of the labelling point image of M shell.
4. the method for claim 1, it is characterized in that, after the nth frame image of the object to be measured that described acquisition camera head collects, according to the labelling point image in described nth frame image, before determining the corresponding relation of described labelling point image and described physical markings point, also include:
Number according at least to the pixel comprised in the number of the pixel on the pixel value of each pixel of described nth frame image, each profile and profile, it is determined that go out described labelling point image.
5. The method of claim 4, characterized in that determining the marker point images according to at least the pixel value of each pixel of the N-th frame image, the number of pixels on each contour, and the number of pixels contained within each contour comprises:
obtaining first candidate marker point images according to the pixel value of each pixel of the N-th frame image; the pixel values of the pixels of the first candidate marker point images are greater than or equal to a first threshold;
obtaining second candidate marker point images according to the number of pixels on the contour of each first candidate marker point image; the number of pixels on the contour of each second candidate marker point image is greater than or equal to a second threshold and less than or equal to a third threshold;
obtaining third candidate marker point images according to the number of pixels contained within the contour of each second candidate marker point image; the number of pixels contained within the contour of each third candidate marker point image is greater than or equal to a fourth threshold;
determining the ellipse parameters of the third candidate marker point images, and determining as the marker point images those third candidate marker point images whose ellipse parameters fall within a preset parameter range.
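The cascaded filtering of claim 5 (pixel-value threshold, contour pixel count, interior pixel count) can be sketched as below. This is a minimal pure-Python illustration, not the patented implementation: the threshold values are assumptions, and the final ellipse-parameter check of the claim is omitted (a real pipeline would typically use a library routine such as ellipse fitting on the blob contour).

```python
import numpy as np
from collections import deque

def candidate_markers(img, t_pix=128, t_perim=(4, 200), t_area=9):
    """Three-stage candidate filter (illustrative thresholds):
    1) keep pixels with value >= first threshold,
    2) keep blobs whose boundary pixel count lies within [second, third] threshold,
    3) keep blobs whose total pixel count >= fourth threshold."""
    binary = img >= t_pix                      # stage 1: pixel-value threshold
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y, x] and not seen[y, x]:
                # flood-fill one connected component (4-connectivity)
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                # boundary pixels = component pixels with a background 4-neighbour
                perim = sum(
                    1 for (cy, cx) in comp
                    if any(not (0 <= ny < h and 0 <= nx < w and binary[ny, nx])
                           for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)))
                )
                if t_perim[0] <= perim <= t_perim[1] and len(comp) >= t_area:  # stages 2+3
                    blobs.append(comp)
    return blobs
```

The contour-length bounds reject blobs that are too small (noise) or too large (ambient light sources), and the area bound rejects thin streaks.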
6. the method for claim 1, it is characterized in that, corresponding relation according to described labelling point image Yu described physical markings point, and the positional information of each physical markings point described and each labelling point image described, determine the described object to be measured movable information in the described nth frame image correspondence moment, including:
Corresponding relation according to described labelling point image Yu described physical markings point, and the positional information of each physical markings point described and each labelling point image described, utilize PnP algorithm determine described camera head relative to described object to be measured rotation amount and translational movement;
According to the described camera head rotation amount relative to described object to be measured and translational movement, obtain the described object to be measured movable information in the described nth frame image correspondence moment; The described object to be measured movable information in the described nth frame image correspondence moment is described object to be measured at the rotation amount relative to described camera head of the described nth frame image correspondence moment and translational movement.
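The PnP step of claim 6 recovers the pose (R, t) that maps the physical marker points in the world coordinate system onto their marker point images in the image coordinate system. The pinhole model it inverts can be sketched as follows; this is the forward projection only, and a real pipeline would call an established PnP solver (e.g. EPnP) rather than this toy model.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Pinhole projection u ~ K (R X + t): the image-formation model that a
    PnP solver inverts to recover R, t from 2D-3D marker correspondences."""
    X_cam = points_3d @ R.T + t          # world -> camera frame
    uv = X_cam @ K.T                     # apply camera intrinsics K
    return uv[:, :2] / uv[:, 2:3]        # perspective divide
```

Given at least four non-degenerate marker correspondences, the PnP problem is to find the (R, t) making these projections match the detected marker point image positions.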
7. The method of claim 6, characterized in that, after determining the rotation and translation of the camera relative to the object to be measured by using the PnP algorithm, and before determining the motion information of the object to be measured at the time corresponding to the N-th frame image, the method further comprises:
optimizing the rotation and translation of the camera relative to the object to be measured by using an LM (Levenberg-Marquardt) algorithm.
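The LM refinement of claim 7 minimizes the reprojection error of the PnP pose. A toy version, with a numeric Jacobian and an axis-angle rotation parametrization, is sketched below; the iteration count, damping schedule, and parametrization are assumptions not stated in the patent.

```python
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def reproj_residual(pose, pts3d, pts2d, K):
    """Stacked reprojection error for pose = (rx, ry, rz, tx, ty, tz)."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = pts3d @ R.T + t
    uv = cam @ K.T
    return ((uv[:, :2] / uv[:, 2:3]) - pts2d).ravel()

def refine_lm(pose0, pts3d, pts2d, K, iters=50, lam=1e-3):
    """Toy Levenberg-Marquardt refinement of the camera pose by minimizing
    reprojection error with a forward-difference Jacobian."""
    pose = np.asarray(pose0, dtype=float).copy()
    for _ in range(iters):
        r = reproj_residual(pose, pts3d, pts2d, K)
        J = np.empty((r.size, 6))
        for j in range(6):                      # numeric Jacobian, column by column
            d = np.zeros(6); d[j] = 1e-6
            J[:, j] = (reproj_residual(pose + d, pts3d, pts2d, K) - r) / 1e-6
        A = J.T @ J + lam * np.eye(6)           # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        new = pose + step
        if np.sum(reproj_residual(new, pts3d, pts2d, K) ** 2) < np.sum(r ** 2):
            pose, lam = new, lam * 0.5          # accept step, reduce damping
        else:
            lam *= 10.0                         # reject step, increase damping
    return pose
```

Starting from the coarse PnP estimate, a few damped Gauss-Newton steps typically drive the reprojection error to near zero on clean data.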
8. the method for claim 1, it is characterised in that described physical markings point is infrared point; Described labelling point image is infrared point image;
Described determine that described object to be measured is after the movable information in described nth frame image correspondence moment, also includes:
Determine the described object to be measured movable information when in the described nth frame image correspondence moment meets predetermined movement weight range, close the infrared point of described object to be measured first side, open the infrared point of described object to be measured second side; Described second side is arrive at the motion information prediction in described nth frame image correspondence moment according to described object to be measured;
Obtain the N+1 two field picture that described camera head collects;
Judge the infrared point image whether including each infrared point of described object to be measured second side in described N+1 two field picture, if so, then determine the described object to be measured movable information in the described N+1 two field picture correspondence moment according to described N+1 two field picture; If it is not, then close the infrared point of described object to be measured second side, open the infrared point of described object to be measured 3rd side, and obtain the N+2 two field picture that described camera head collects; Described 3rd side is obtain according to the circular order preset.
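The side-switching logic of claim 8 reduces to a small decision rule: keep the motion-predicted side if its infrared points are visible in the next frame, otherwise fall back to the preset cyclic order. A minimal sketch, where the side names and the cycle itself are assumptions:

```python
# Assumed preset cyclic order of the object's sides.
SIDES = ["front", "right", "back", "left"]

def choose_side(predicted, visible_in_next_frame):
    """Return the side whose infrared points should be lit next.
    `predicted` is the side forecast from the motion information; if its
    IR point images were not found in frame N+1, advance cyclically."""
    if visible_in_next_frame:
        return predicted                                  # prediction confirmed
    return SIDES[(SIDES.index(predicted) + 1) % len(SIDES)]  # cyclic fallback
```

Lighting only one side at a time keeps the marker point images unambiguous while still covering the full rotation of the object.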
9. A device for determining motion information of an object to be measured, characterized by comprising:
a first acquisition module, configured to acquire an N-th frame image of the object to be measured collected by a camera, the N-th frame image including a marker point image of each physical marker point on a first side of the object to be measured;
a determination module, configured to determine the correspondence between the marker point images and the physical marker points according to the marker point images in the N-th frame image;
a second acquisition module, configured to acquire the position information, in a preset world coordinate system, of each physical marker point on the first side of the object to be measured, and the position information, in a preset image coordinate system, of each marker point image in the N-th frame image;
a processing module, configured to determine the motion information of the object to be measured at the time corresponding to the N-th frame image, according to the correspondence between the marker point images and the physical marker points and the position information of each physical marker point and each marker point image.
10. The device of claim 9, characterized in that the determination module is specifically configured to:
determine a reference marker point image in the N-th frame image based on an envelope method; the reference marker point image is one of the marker point images in the N-th frame image;
determine the number of each marker point image in the N-th frame image according to the position relationship between the marker point images in the N-th frame image and the reference marker point image;
determine the physical marker point having the same number as a marker point image to be the physical marker point corresponding to that marker point image, thereby obtaining the correspondence between the marker point images and the physical marker points; the numbers of the physical marker points and the numbers of the marker point images are obtained based on the same numbering rule; the physical marker points on the first side of the object to be measured are distributed in a convex polygon array.
11. The device of claim 10, characterized in that the determination module is specifically configured to:
determine the marker point images of a first layer and the order of the marker point images of the first layer, according to the position relationship between the marker point images in the N-th frame image and the reference marker point image; the reference marker point image is a marker point image of the first layer;
determine the marker point images of the M-th layer and the order of the marker point images of the M-th layer, according to the position relationship between the reference marker point image and the marker point images in the N-th frame image other than the marker point images of the first layer to the (M-1)-th layer; M is an integer greater than or equal to 2;
determine the numbers of the marker point images of the first layer to the M-th layer according to the order of the marker point images of the first layer to the M-th layer.
12. The device of claim 9, characterized in that the determination module is further configured to:
determine the marker point images according to at least the pixel value of each pixel of the N-th frame image, the number of pixels on each contour, and the number of pixels contained within each contour.
13. The device of claim 12, characterized in that the determination module is specifically configured to:
obtain first candidate marker point images according to the pixel value of each pixel of the N-th frame image; the pixel values of the pixels of the first candidate marker point images are greater than or equal to a first threshold;
obtain second candidate marker point images according to the number of pixels on the contour of each first candidate marker point image; the number of pixels on the contour of each second candidate marker point image is greater than or equal to a second threshold and less than or equal to a third threshold;
obtain third candidate marker point images according to the number of pixels contained within the contour of each second candidate marker point image; the number of pixels contained within the contour of each third candidate marker point image is greater than or equal to a fourth threshold;
determine the ellipse parameters of the third candidate marker point images, and determine as the marker point images those third candidate marker point images whose ellipse parameters fall within a preset parameter range.
14. The device of claim 9, characterized in that the processing module is specifically configured to:
determine the rotation and translation of the camera relative to the object to be measured by using a PnP algorithm, according to the correspondence between the marker point images and the physical marker points and the position information of each physical marker point and each marker point image;
obtain the motion information of the object to be measured at the time corresponding to the N-th frame image according to the rotation and translation of the camera relative to the object to be measured; the motion information of the object to be measured at the time corresponding to the N-th frame image is the rotation and translation of the object to be measured relative to the camera at the time corresponding to the N-th frame image.
15. The device of claim 14, characterized in that the processing module is further configured to:
optimize the rotation and translation of the camera relative to the object to be measured by using an LM algorithm.
16. The device of claim 9, characterized in that the physical marker points are infrared points and the marker point images are infrared point images;
the processing module is further configured to:
when it is determined that the motion information of the object to be measured at the time corresponding to the N-th frame image falls within a preset motion amplitude range, turn off the infrared points on the first side of the object to be measured and turn on the infrared points on a second side of the object to be measured; the second side is predicted from the motion information of the object to be measured at the time corresponding to the N-th frame image;
acquire the (N+1)-th frame image collected by the camera;
judge whether the (N+1)-th frame image includes the infrared point images of the infrared points on the second side of the object to be measured; if so, determine the motion information of the object to be measured at the time corresponding to the (N+1)-th frame image according to the (N+1)-th frame image; if not, turn off the infrared points on the second side of the object to be measured, turn on the infrared points on a third side of the object to be measured, and acquire the (N+2)-th frame image collected by the camera; the third side is obtained according to a preset cyclic order.
CN201610096765.9A 2016-02-22 2016-02-22 Method and device for determining movement information of to-be-detected object Pending CN105631901A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610096765.9A CN105631901A (en) 2016-02-22 2016-02-22 Method and device for determining movement information of to-be-detected object
PCT/CN2016/096379 WO2017143745A1 (en) 2016-02-22 2016-08-23 Method and apparatus for determining movement information of to-be-detected object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610096765.9A CN105631901A (en) 2016-02-22 2016-02-22 Method and device for determining movement information of to-be-detected object

Publications (1)

Publication Number Publication Date
CN105631901A true CN105631901A (en) 2016-06-01

Family

ID=56046788

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610096765.9A Pending CN105631901A (en) 2016-02-22 2016-02-22 Method and device for determining movement information of to-be-detected object

Country Status (2)

Country Link
CN (1) CN105631901A (en)
WO (1) WO2017143745A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106028001A (en) * 2016-07-20 2016-10-12 上海乐相科技有限公司 Optical positioning method and device
CN106780609A (en) * 2016-11-28 2017-05-31 中国电子科技集团公司第三研究所 Vision positioning method and vision positioning device
WO2017143745A1 (en) * 2016-02-22 2017-08-31 上海乐相科技有限公司 Method and apparatus for determining movement information of to-be-detected object
CN107293182A (en) * 2017-07-19 2017-10-24 深圳国泰安教育技术股份有限公司 A kind of vehicle teaching method, system and terminal device based on VR
WO2017219736A1 (en) * 2016-06-22 2017-12-28 北京蚁视科技有限公司 Display method for virtual player for playing motion video in virtual reality
CN108510545A (en) * 2018-03-30 2018-09-07 京东方科技集团股份有限公司 Space-location method, space orientation equipment, space positioning system and computer readable storage medium
CN108769668A (en) * 2018-05-31 2018-11-06 歌尔股份有限公司 Method for determining position and device of the pixel in VR display screens in camera imaging
CN111207747A (en) * 2018-11-21 2020-05-29 中国科学院沈阳自动化研究所 Spatial positioning method based on HoloLens glasses
WO2023060717A1 (en) * 2021-10-13 2023-04-20 中山大学 High-precision positioning method and system for object surface

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378922B (en) * 2019-06-10 2022-11-08 五邑大学 Smooth image generation method and device based on adaptive threshold segmentation algorithm
CN113111687B (en) * 2020-01-13 2024-06-18 阿里巴巴集团控股有限公司 Data processing method, system and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194879A1 (en) * 2007-07-10 2010-08-05 Koninklijke Philips Electronics N.V. Object motion capturing system and method
CN101894377A (en) * 2010-06-07 2010-11-24 中国科学院计算技术研究所 Tracking method of three-dimensional mark point sequence and system thereof
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
CN103488291A (en) * 2013-09-09 2014-01-01 北京诺亦腾科技有限公司 Immersion virtual reality system based on motion capture
CN104616292A (en) * 2015-01-19 2015-05-13 南开大学 Monocular vision measurement method based on global homography matrix

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103315739B (en) * 2013-05-22 2015-08-19 华东师范大学 The nuclear magnetic resonance image method and system of motion artifacts is exempted based on Dynamic Tracing Technology
CN104298345B (en) * 2014-07-28 2017-05-17 浙江工业大学 Control method for man-machine interaction system
CN104463108B (en) * 2014-11-21 2018-07-31 山东大学 A kind of monocular real time target recognitio and pose measuring method
CN105631901A (en) * 2016-02-22 2016-06-01 上海乐相科技有限公司 Method and device for determining movement information of to-be-detected object

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100194879A1 (en) * 2007-07-10 2010-08-05 Koninklijke Philips Electronics N.V. Object motion capturing system and method
CN101894377A (en) * 2010-06-07 2010-11-24 中国科学院计算技术研究所 Tracking method of three-dimensional mark point sequence and system thereof
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
CN103488291A (en) * 2013-09-09 2014-01-01 北京诺亦腾科技有限公司 Immersion virtual reality system based on motion capture
CN104616292A (en) * 2015-01-19 2015-05-13 南开大学 Monocular vision measurement method based on global homography matrix

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017143745A1 (en) * 2016-02-22 2017-08-31 上海乐相科技有限公司 Method and apparatus for determining movement information of to-be-detected object
WO2017219736A1 (en) * 2016-06-22 2017-12-28 北京蚁视科技有限公司 Display method for virtual player for playing motion video in virtual reality
CN107528993A (en) * 2016-06-22 2017-12-29 北京蚁视科技有限公司 For the display methods for the virtual player that sport video is played in virtual reality
CN107528993B (en) * 2016-06-22 2019-09-20 北京蚁视科技有限公司 For playing the display methods of the virtual player of sport video in virtual reality
CN106028001B (en) * 2016-07-20 2019-01-04 上海乐相科技有限公司 A kind of optical positioning method and device
CN106028001A (en) * 2016-07-20 2016-10-12 上海乐相科技有限公司 Optical positioning method and device
CN106780609A (en) * 2016-11-28 2017-05-31 中国电子科技集团公司第三研究所 Vision positioning method and vision positioning device
CN106780609B (en) * 2016-11-28 2019-06-11 中国电子科技集团公司第三研究所 Vision positioning method and vision positioning device
CN107293182A (en) * 2017-07-19 2017-10-24 深圳国泰安教育技术股份有限公司 A kind of vehicle teaching method, system and terminal device based on VR
CN108510545A (en) * 2018-03-30 2018-09-07 京东方科技集团股份有限公司 Space-location method, space orientation equipment, space positioning system and computer readable storage medium
US10872436B2 (en) 2018-03-30 2020-12-22 Beijing Boe Optoelectronics Technology Co., Ltd. Spatial positioning method, spatial positioning device, spatial positioning system and computer readable storage medium
CN108769668A (en) * 2018-05-31 2018-11-06 歌尔股份有限公司 Method for determining position and device of the pixel in VR display screens in camera imaging
CN111207747A (en) * 2018-11-21 2020-05-29 中国科学院沈阳自动化研究所 Spatial positioning method based on HoloLens glasses
CN111207747B (en) * 2018-11-21 2021-09-28 中国科学院沈阳自动化研究所 Spatial positioning method based on HoloLens glasses
WO2023060717A1 (en) * 2021-10-13 2023-04-20 中山大学 High-precision positioning method and system for object surface

Also Published As

Publication number Publication date
WO2017143745A1 (en) 2017-08-31

Similar Documents

Publication Publication Date Title
CN105631901A (en) Method and device for determining movement information of to-be-detected object
KR102497683B1 (en) Method, device, device and storage medium for controlling multiple virtual characters
US20200286288A1 (en) Method, device and medium for determining posture of virtual object in virtual environment
CN104904200B (en) Catch the unit and system of moving scene
CN107836012A (en) Mapping method between projection image generation method and its device, image pixel and depth value
CN108735052B (en) Augmented reality free fall experiment method based on SLAM
US11798223B2 (en) Potentially visible set determining method and apparatus, device, and storage medium
CN104204848B (en) There is the search equipment of range finding camera
CN108701362A (en) Obstacle during target following avoids
CN105704475A (en) Three-dimensional stereo display processing method of curved-surface two-dimensional screen and apparatus thereof
CN108668108B (en) Video monitoring method and device and electronic equipment
CN105934775A (en) Method and system for constructing virtual image anchored onto real-world object
CN106444846A (en) Unmanned aerial vehicle and method and device for positioning and controlling mobile terminal
WO2014062001A1 (en) Method and system for controlling virtual camera in virtual 3d space and computer-readable recording medium
US11373329B2 (en) Method of generating 3-dimensional model data
US12047674B2 (en) System for generating a three-dimensional scene of a physical environment
JP2017120556A (en) Head-mounted display for operation, control method of head-mounted display for operation, and program for head-mounted display for operation
US20190273945A1 (en) System and method for constructing optical flow fields
KR20210147033A (en) A method and apparatus for displaying a hotspot map, and a computer device and a readable storage medium
CN111724444B (en) Method, device and system for determining grabbing point of target object
CN105892638A (en) Virtual reality interaction method, device and system
CN109448117A (en) Image rendering method, device and electronic equipment
CA3170899A1 (en) Techniques for preloading and displaying high quality image data
WO2009100778A1 (en) Improved rotation independent face detection.
CN105243268A (en) Game map positioning method and apparatus as well as user terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20160601