CN114519744B - Method, device and system for posture determination of wearing appliance - Google Patents

Method, device and system for posture determination of wearing appliance

Info

Publication number
CN114519744B
CN114519744B
Authority
CN
China
Prior art keywords
image
centroid
visible light
wearing
appliance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210413749.3A
Other languages
Chinese (zh)
Other versions
CN114519744A (en)
Inventor
徐英伟
廖观万
宋炜
王方亮
王建平
周殿涛
吴继平
宋建华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wanlong Essential Technology Co ltd
Original Assignee
Beijing Wanlong Essential Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wanlong Essential Technology Co ltd filed Critical Beijing Wanlong Essential Technology Co ltd
Priority to CN202210413749.3A priority Critical patent/CN114519744B/en
Publication of CN114519744A publication Critical patent/CN114519744A/en
Application granted granted Critical
Publication of CN114519744B publication Critical patent/CN114519744B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/66Analysis of geometric attributes of image moments or centre of gravity

Abstract

The present disclosure relates to a method, apparatus and system for wearable appliance pose determination, the method for wearable appliance pose determination comprising: acquiring a first image and a second image for a visual indicia, wherein the visual indicia is disposed on the wearing appliance; segmenting the first image and the second image to obtain a first foreground region and a second foreground region corresponding to the visual marker respectively; determining a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region; calculating a first set of centroid points corresponding to each connected region in the first set of connected regions and a second set of centroid points corresponding to each connected region in the second set of connected regions; and determining a pose of the wearing appliance based on the first set of centroid points and the second set of centroid points. In this way, the accumulated error in the posture determination of the wearing appliance can be eliminated, and the measurement accuracy is improved.

Description

Method, device and system for posture determination of wearing appliance
Technical Field
The present disclosure relates generally to the field of wearable appliance pose determination, and in particular, to methods, apparatuses, and systems for wearable appliance pose determination.
Background
Current wearing appliances (such as helmets, safety helmets, earphones, VR headsets, smart watches) can be used not only as protection devices, but also as powerful information platforms. A wearable appliance as a powerful information platform needs to interact with an operating system. With the increasing sophistication and complexity of operating systems, the demands on the wearing gear are also increasing.
Taking a head-mounted device such as a helmet as an example, when used together with a vehicle, a helmet-mounted display can guide on-board equipment such as an optoelectronic device in real time to follow the rotation of the head of the vehicle driver and perform operations such as target tracking. On the other hand, it displays the environment outside the vehicle cabin at the current viewing angle, so that the vehicle driver has a sense of immersion in the surrounding environment and the eyes do not need to refocus frequently; visual fatigue is avoided and the burden on the vehicle driver is greatly reduced.
One of the key technologies of the helmet-mounted display is attitude measurement of the helmet, that is, accurately and rapidly measuring the relative attitude angle between the helmet and the vehicle cabin. The viewing angle of the vehicle driver can be known from the helmet attitude, so that the appropriate imagery can be transmitted to the helmet-mounted display, and the vehicle driver and the equipment on the vehicle can accurately perceive the environment.
Determining the attitude of a carrier with inertial sensors is the method generally adopted by current equipment. Inertial measurement has unique advantages such as independence from external information, a high update rate, high reliability, good concealment, and immunity to external interference. However, it suffers from accumulated errors: the measurement error gradually accumulates over time and becomes large when inertial measurement is applied to long-duration measurement.
Disclosure of Invention
The following presents a simplified summary of one or more aspects in order to provide a basic understanding of such aspects. This summary is not an extensive overview of all contemplated aspects, and is intended to neither identify key or critical elements of all aspects nor delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more aspects in a simplified form as a prelude to the more detailed description that is presented later.
According to an embodiment of the present disclosure, a solution for posture determination of a wearable appliance is provided.
In a first aspect of the present disclosure, a method for wearable appliance pose determination is provided. The method comprises the following steps: acquiring a first image and a second image for a visual indicia, wherein the visual indicia is disposed on the wearable appliance; segmenting the first image and the second image to obtain a first foreground region and a second foreground region corresponding to the visual marker respectively; determining a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region; calculating a first set of centroid points corresponding to each connected region in the first set of connected regions and a second set of centroid points corresponding to each connected region in the second set of connected regions; and determining a pose of the wearable appliance based on the first set of centroid points and the second set of centroid points.
In some embodiments, wherein determining the pose of the wearable appliance based on the first set of centroid points and the second set of centroid points further comprises: matching centroid points in the first centroid point set with corresponding centroid points in the second centroid point set to obtain matched centroid point pairs; and determining three-dimensional space coordinates of the centroid points based on the matched centroid point pairs to obtain the posture of the wearable appliance.
In some embodiments, the visual indicia comprises a visible light LED light, and the acquiring the first and second images for the visual indicia comprises: performing image statistics in response to the first visible light detector and the second visible light detector detecting the visual marker; performing exposure control on the first visible light detector and the second visible light detector based on the statistics; and acquiring a first image and a second image for the visual marker based on the exposure control.
In some embodiments, determining three-dimensional spatial coordinates of a centroid point based on the matched centroid point pair further comprises: respectively calculating an internal and external parameter matrix of the first visible light detector and an internal and external parameter matrix of the second visible light detector based on the first visible light detector and the second visible light detector; and determining three-dimensional space coordinates of the centroid point based on the internal and external parameter matrices of the first visible light detector and the internal and external parameter matrices of the second visible light detector and the matched centroid point pair.
In some embodiments, the wearing appliance comprises a helmet and the visible light LED lamp has a visible light wavelength of 940 nm.
In a second aspect of the present disclosure, an apparatus for wearable appliance pose determination is provided. The device includes: a visual marker image acquisition module configured to acquire a first image and a second image for a visual marker, wherein the visual marker is disposed on the wearable appliance; an image segmentation module configured to segment the first and second images to obtain first and second foreground regions corresponding to the visual indicia, respectively; a connected regions determination module configured to determine a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region; a centroid calculation module configured to calculate a first set of centroid points corresponding to respective connected regions of the first set of connected regions and a second set of centroid points corresponding to respective connected regions of the second set of connected regions; and a pose determination module configured to determine a pose of the wearable appliance based on the first set of centroid points and the second set of centroid points.
In some embodiments, the wearing appliance comprises a helmet and the device comprises an FPGA-based data processing unit binocular stereo measurement device.
In a third aspect of the present disclosure, there is provided a system for helmet pose determination for use with a vehicle, the system comprising: a data processing unit; a wearing instrument inertial measurement unit disposed on the wearing instrument, the wearing instrument including a visual marker; the vehicle cabin inertia measuring unit is arranged on or in a vehicle cabin of the vehicle; the device for determining the posture of the wearing appliance is arranged on or in a vehicle cabin and is used for visual calibration of the wearing appliance; the wearing appliance inertia measurement unit, the vehicle cabin inertia measurement unit and the vision calibration device are all electrically connected with the data processing unit.
In some embodiments, the helmet inertial measurement unit comprises at least one of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer, and a three-axis magnetometer, and the cabin inertial measurement unit comprises a vehicle-mounted inertial navigation unit.
In some embodiments, the means for wearing appliance pose determination comprises a binocular camera.
In a fourth aspect of the present disclosure, an electronic device for wearable appliance pose determination is provided. The electronic device includes: one or more processors; and memory for storing one or more programs that, when executed by the one or more processors, cause an electronic device to implement a method in accordance with the first aspect of the disclosure.
In a fifth aspect of the disclosure, a computer-readable medium is presented, on which a computer program is stored, which program, when executed by a processor, carries out the method according to the first aspect of the disclosure.
Various aspects according to embodiments of the present disclosure may have the following advantageous effects: the attitude determination method is simple and efficient, has strong applicability, and places low demands on hardware; it can eliminate the accumulated error of posture measurement of the wearing appliance and greatly improve measurement precision; and a hardware basis for the measurement method is provided, on which corresponding development can achieve high-precision posture measurement of the wearing appliance.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
Fig. 1 illustrates a schematic view of an example environment in which various embodiments of the present disclosure can be implemented, wherein (a) is a schematic view of an image capture device and (b) is a schematic view of a wearing appliance.
Fig. 2 is a flow chart illustrating a method for wearable gear pose determination according to some embodiments of the present disclosure.
Fig. 3 is an example illustrating detection by left and right visible light detectors using an FPGA-based data processing unit according to some embodiments of the present disclosure.
Fig. 4 is an integrated block diagram illustrating fast helmet pose determination according to some embodiments of the present disclosure.
Fig. 5 is an example of pixel neighborhood determination, according to some embodiments of the present disclosure.
Fig. 6 is a schematic diagram of an apparatus for wearable appliance pose determination, according to some embodiments of the present disclosure.
FIG. 7 illustrates a block diagram of a computing device capable of implementing various embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and the embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
In describing embodiments of the present disclosure, the terms "include" and its derivatives should be interpreted as being inclusive, i.e., "including but not limited to. The term "based on" should be understood as "based at least in part on". The term "one embodiment" or "the embodiment" should be understood as "at least one embodiment". The terms "first," "second," and the like may refer to different or the same objects. Other explicit and implicit definitions are also possible below.
For example, an element, or any portion of an element, or any combination of elements, may be implemented as a "processing system" that includes one or more processors. Examples of processors include microprocessors, microcontrollers, Graphics Processing Units (GPUs), Central Processing Units (CPUs), application processors, Digital Signal Processors (DSPs), Reduced Instruction Set Computing (RISC) processors, systems on chip (SoCs), baseband processors, Field Programmable Gate Arrays (FPGAs), Programmable Logic Devices (PLDs), state machines, gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functions described throughout this disclosure. One or more processors in the processing system may execute software. Software should be construed broadly to mean instructions, instruction sets, code segments, program code, programs, subprograms, software components, applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, and the like, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
Thus, in one or more examples, the functions described may be implemented in hardware, software, or any combination thereof. If implemented in software, the functions may be stored on or encoded in one or more instructions or code on a computer-readable medium. Computer-readable media includes computer storage media that may be referred to as non-transitory computer-readable media. The non-transitory computer readable medium may exclude transient signals. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media may comprise Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable ROM (EEPROM), optical disk storage, magnetic disk storage, other magnetic storage devices, combinations of the above types of computer-readable media, or any other medium that may be used to store computer-executable code in the form of instructions or data structures that may be accessed by a computer.
In the technical scheme of the present disclosure, a first detector (referred to as a first visible light detector or left visible light detector) and a second detector (referred to as a second visible light detector or right visible light detector) image a vision calibration device mounted on the wearing appliance, and a processing device applies image recognition to the captured images to calibrate the helmet posture when the helmet reaches a special position, thereby eliminating the MEMS accumulated error and improving measurement accuracy.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. Fig. 1 illustrates a schematic diagram of an example environment 100 in which various embodiments of the present disclosure can be implemented. The environment 100 may comprise an image acquisition apparatus 1 and a wearing appliance 3, the image acquisition apparatus 1 determining or measuring the pose of the wearing appliance 3 by acquiring respective images of visual markers 15 provided on the wearing appliance 3.
In some embodiments, referring to fig. 1, the wearing appliance 3 may be a helmet. It should be understood that the wearing device may also be other wearable devices, such as a safety helmet, an earphone, a VR headset, or a wearable device, such as a smart watch. In some embodiments, the helmet has visual indicia 15 disposed thereon.
In some embodiments, the visual indicia 15 may be a visible light LED lamp, for example, an LED lamp that emits visible light at a wavelength of 940 nm. It should be understood that the above-mentioned visual markers 15 implemented as LED lamps and the corresponding wavelengths of visible light are merely exemplary, and those skilled in the art can also select other visual markers according to actual needs, such as reflective markers and the like; other wavelengths of light may also be selected as desired, such as 900nm, 1000nm, etc., and the present disclosure is not limited thereto.
In some embodiments, with continued reference to fig. 1, the image acquisition device 1 may be a camera. For example, the image capturing device 1 may be a binocular camera, or more specifically, an integrated FPGA hardware platform device for binocular stereo pose determination of the wearing appliance 3. In such an embodiment, the image acquisition apparatus 1 may comprise a first detector 11 (or "first visible light detector", "left visible light detector") and a second detector 13 (or "second visible light detector", "right visible light detector") for detecting the image of the visual marker 15. In some embodiments, the first detector 11 and the second detector 13 may be visible light detectors, and accordingly, the visual indicia 15 may be visible light visual indicia. Such embodiments are described in further detail below.
In some embodiments, the visual indicia 15 may be provided on a helmet and the image capture device 1 may be provided in or on the cabin of the vehicle. In this way, the image capture device 1 can measure the posture of the wearing device 3 by the visual marker 15. It should be noted that the visual mark 15 may be disposed in or on the vehicle cabin, and accordingly, the image capturing device 1 is disposed on the helmet, so that the same posture measurement is realized by the same operation, which is not limited by the present disclosure. The determination of the posture of the wearing appliance is described in detail below with reference to fig. 2. For ease of description, the following discussion will be in conjunction with environment 100 shown in FIG. 1.
Fig. 2 is a flow chart illustrating a method for wearable gear pose determination according to some embodiments of the present disclosure. The method may be implemented in environment 100.
As shown in fig. 2, at block 201, a first image and a second image are acquired for a visual indicia, wherein the visual indicia is disposed on the wearable appliance.
In some embodiments, in conjunction with fig. 1, a first image may be obtained by the first detector 11 and a second image may be obtained by the second detector 13. It should be understood that the second image may also be obtained by the first detector 11 and the first image may also be obtained by the second detector 13, as the present disclosure is not limited in this regard. Wherein the visual marking 15 is provided on the wearing appliance 3. Specifically, the visual mark 15 may be disposed at any suitable position outside the wearing device 3, as long as the corresponding visual mark effect can be achieved, and the present disclosure does not limit this.
Fig. 3 provides an example in which the first and second visible light detectors detect with an FPGA-based data processing unit to obtain the first and second images, according to some embodiments of the present disclosure. As shown in fig. 3, the first detector 11 (i.e., the "left visible light detector") and the second detector 13 (i.e., the "right visible light detector") may be communicatively coupled to the FPGA-based data processing unit, which may be, for example, an FPGA chip. In such an embodiment, when the user wears the wearing appliance 3 (e.g., a helmet) and the visual marker can be captured by the visible light detectors of the integrated FPGA hardware platform for rapid binocular stereo vision posture measurement of the wearing appliance, the image statistics module in the FPGA-based data processing unit performs image statistics to control the exposure duration of the visible light detectors, obtain an appropriate exposure time, and thus obtain a clear visual marker image, as shown more clearly in fig. 4. In addition, the FPGA-based data processing unit may also perform the other processing operations of fig. 4, which are described in detail below in conjunction with fig. 4. With continued reference to fig. 3, after the FPGA-based data processing unit completes the data processing, the processed result may be output as the three-dimensional coordinates of the light spots, thereby obtaining the posture of the wearing appliance.
It should be noted that the FPGA-based data processing unit may be any processing unit, chip or other processing device capable of obtaining the visual marker image, and the disclosure is not limited thereto.
It should be noted that although the image statistics module is shown as two modules in fig. 4, the image statistics module may be a single module, which is not limited by the present disclosure.
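As a rough illustration of the exposure control driven by image statistics described above, the following Python sketch adjusts the exposure time toward a target mean gray level. The target value, gain, limits, and function name are illustrative assumptions of this sketch and are not specified in the present disclosure.

```python
import numpy as np

def adjust_exposure(image: np.ndarray, exposure_us: float,
                    target_mean: float = 40.0, gain: float = 0.5,
                    min_us: float = 10.0, max_us: float = 10000.0) -> float:
    """Proportional exposure update driven by the mean gray level of a frame.

    The disclosure only states that an image-statistics module controls the
    exposure duration of the visible light detectors; the numbers used here
    are placeholders for illustration.
    """
    mean = float(image.mean())
    # Lengthen the exposure when the frame is darker than the target,
    # shorten it when brighter, and clamp to the detector's supported range.
    new_exposure = exposure_us * (1.0 + gain * (target_mean - mean) / target_mean)
    return float(np.clip(new_exposure, min_us, max_us))
```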
At block 203, the first image and the second image are segmented to obtain a first foreground region and a second foreground region corresponding to the visual indicia, respectively.
The process of segmenting the first image and the second image is described below in conjunction with fig. 4. Fig. 4 is an integrated block diagram illustrating fast helmet pose determination according to some embodiments of the present disclosure.
In some embodiments, referring to fig. 4, the adaptive threshold segmentation module segments the first image and the second image, respectively, and segments the visual marker points from the background to obtain a first foreground region and a second foreground region corresponding to the visual marker. It should be noted that although the adaptive threshold splitting module is shown as two modules in fig. 4, the adaptive threshold splitting module may be a single module, which is not limited by the present disclosure.
In some embodiments, the Otsu (OTSU) adaptive threshold segmentation algorithm may be employed. Its calculation is simple and is not affected by the brightness or contrast of the image. Specifically, the method divides the image into a background part and a foreground part according to the gray-level characteristics of the image. Since variance is a measure of the uniformity of the gray-level distribution, the larger the inter-class variance between background and foreground, the larger the difference between the two parts that make up the image; when part of the foreground is mistaken for background or part of the background is mistaken for foreground, this difference becomes smaller. Thus, a segmentation that maximizes the inter-class variance minimizes the probability of misclassification.
Further, in one embodiment, the OTSU algorithm assumes that there is a threshold TH that classifies all pixels of the image into two classes, C1 (gray value less than TH) and C2 (gray value greater than TH). The means of these two classes of pixels are m1 and m2, the global mean of the image is mG, and the probabilities that a pixel is assigned to class C1 and class C2 are p1 and p2, respectively. Thus, the following relations are obtained:
$$m_G = p_1 m_1 + p_2 m_2 \qquad (1)$$

$$p_1 + p_2 = 1 \qquad (2)$$
According to the definition of variance, the inter-class variance is:

$$\sigma_B^2 = p_1 (m_1 - m_G)^2 + p_2 (m_2 - m_G)^2 \qquad (3)$$
Substituting formula (1) into formula (3) and simplifying gives:

$$\sigma_B^2 = p_1 p_2 (m_1 - m_2)^2 \qquad (4)$$
Thus, by traversing the gray levels, the gray level K that maximizes the above expression is the OTSU threshold, which separates the brighter foreground region from the darker background region.
It should be noted that the OTSU method described above is merely exemplary, and those skilled in the art can also use any suitable method in the field of image segmentation to distinguish between the foreground region and the background region, and the disclosure is not limited thereto.
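For illustration only, the following Python sketch performs the exhaustive traversal described above and returns the gray level that maximizes the inter-class variance of equation (4). The function name and the use of NumPy are assumptions of this sketch, not part of the original disclosure.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Search all gray levels of an 8-bit image for the one maximizing
    sigma_B^2 = p1 * p2 * (m1 - m2)^2, as in equation (4)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256, dtype=np.float64)
    best_k, best_var = 0, -1.0
    for k in range(1, 256):
        p1 = prob[:k].sum()
        p2 = 1.0 - p1
        if p1 == 0.0 or p2 == 0.0:
            continue
        m1 = (levels[:k] * prob[:k]).sum() / p1   # mean of class C1 (below k)
        m2 = (levels[k:] * prob[k:]).sum() / p2   # mean of class C2 (k and above)
        var = p1 * p2 * (m1 - m2) ** 2            # equation (4)
        if var > best_var:
            best_var, best_k = var, k
    return best_k

# Foreground mask for one of the two images:
# foreground = gray >= otsu_threshold(gray)
```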
In this way, a first foreground region and a second foreground region corresponding to the visual marker, respectively, can be obtained.
At block 205, a first set of connected regions for a first foreground region and a second set of connected regions for a second foreground region are determined.
After separating the target from the background, each connected region needs to be labeled. In some embodiments, with continued reference to fig. 4, the multi-centroid region segmentation and labeling module segments and labels the centroid regions separately. Specifically, the 4-neighborhood relationship shown in fig. 5 may be employed as the pixel neighborhood relationship decision rule.
FIG. 5 is an example of a 4-neighborhood pixel neighborhood determination, according to some embodiments of the present disclosure. Fig. 5 shows that the neighborhood can be determined according to the 4 neighborhood pixels of a certain pixel. Specifically, in conjunction with fig. 4 and 5, all connected regions can be determined by the following algorithm flow:
First, scan the image until a pixel with value B(x, y) == 1 is reached;
Second, take B(x, y) as a seed (pixel position), assign it a label, and push all foreground pixels adjacent to the seed under the 4-neighborhood rule onto a stack;
Third, pop the top pixel, assign it the same label, and then push all foreground pixels adjacent to it onto the stack;
Fourth, repeat the third step until the stack is empty;
at this point, one connected region in image B has been found, and the pixel values in this region are marked with the label;
Finally, keep repeating the first through fourth steps until the scan is finished; after the scan is finished, all connected regions in the image have been obtained.
It should be noted that the above flow of obtaining the connected region is only exemplary, and a 6-neighborhood relationship and an 8-neighborhood relationship may also be used as a pixel neighborhood relationship determination rule, and any other suitable algorithm may be used as a rule, which is not limited by the present disclosure.
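A minimal Python sketch of the scan-and-seed-fill labeling flow above, using the 4-neighborhood rule; the binary image is assumed to be a NumPy array of 0/1 values, and the function name and data layout are illustrative assumptions rather than part of the original disclosure.

```python
import numpy as np

def label_connected_regions(binary: np.ndarray) -> np.ndarray:
    """Label 4-connected foreground regions of a 0/1 image by seed filling."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)   # 0 means "not labeled yet"
    current_label = 0
    for y in range(h):
        for x in range(w):
            if binary[y, x] == 1 and labels[y, x] == 0:
                current_label += 1              # new seed -> new label
                labels[y, x] = current_label
                stack = [(y, x)]
                while stack:                    # seed filling with a stack
                    cy, cx = stack.pop()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] == 1 and labels[ny, nx] == 0):
                            labels[ny, nx] = current_label
                            stack.append((ny, nx))
    return labels
```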
At block 207, a first set of centroid points corresponding to each connected region in the first set of connected regions and a second set of centroid points corresponding to each connected region in the second set of connected regions are calculated.
After the connected domains are divided, the centroid position of each connected domain needs to be calculated. In some embodiments, with continued reference to fig. 4, the centroid calculation module performs centroid point calculations within each centroid region.
Specifically, the centroid point for each connected region can be calculated using the following equation:
$$\bar{x} = \frac{1}{n}\sum_{k=1}^{n} x_k \qquad (5)$$

$$\bar{y} = \frac{1}{n}\sum_{k=1}^{n} y_k \qquad (6)$$
where x_k and y_k are the x-coordinate and y-coordinate of each pixel in the connected region, and n is the number of pixels in the connected region. In embodiments where the wearing appliance is a helmet, the center positions of the LED lamps on the helmet can thus be obtained. It should be noted that the above manner of calculating the centroid point is merely exemplary, and any manner in which the centroid point can be calculated can be applied to the present disclosure. It should also be noted that although the centroid calculation module is shown as two modules in fig. 4, the centroid calculation module may be a single module, which is not limited by the present disclosure.
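For illustration, a short Python sketch of equations (5) and (6) applied to a label image such as the one produced by the labeling sketch above; the function name and return format are assumptions of this sketch.

```python
import numpy as np

def region_centroids(labels: np.ndarray) -> list:
    """Centroid (x_bar, y_bar) of every labeled region: the mean x- and
    y-coordinates over the n pixels of that region, as in equations (5)-(6)."""
    centroids = []
    for label in range(1, int(labels.max()) + 1):
        ys, xs = np.nonzero(labels == label)    # pixel coordinates of the region
        if xs.size > 0:
            centroids.append((float(xs.mean()), float(ys.mean())))
    return centroids
```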
At block 209, a pose of the wearing appliance is determined based on the first set of centroid points and the second set of centroid points.
In some embodiments, with continued reference to fig. 4, the three-dimensional coordinate position of the centroid point, i.e., the pose of the wearing appliance 3, may be directly calculated from the internal and external reference matrices of the first and second detectors for the matching centroid point pair.
For example, matching a centroid point in the first set of centroid points with a corresponding centroid point in the second set of centroid points results in a matched pair of centroid points, and based on the matched pair of centroid points, three-dimensional spatial coordinates of the centroid point are determined to result in the posture of the wearable appliance 3.
More specifically, the posture of the wearing appliance 3 can be determined using the following three-dimensional reconstruction equations.
$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f/dx & 0 & u_0 \\ 0 & f/dy & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R & T \end{bmatrix} \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \qquad (7)$$

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = P \begin{bmatrix} X_W \\ Y_W \\ Z_W \\ 1 \end{bmatrix} \qquad (8)$$
where u, v are the pixel position of the target on the camera image plane, and X_W, Y_W, Z_W is the position of the target in the world coordinate system. The homogeneous world coordinates of the target are transformed by the camera intrinsic matrix, formed by parameters such as the pixel sizes dx and dy and the focal length f, and by the extrinsic matrix, formed by the rotation matrix R and the translation matrix T, to obtain the homogeneous coordinates of the target's screen pixel. All transformation matrices are combined into a homography matrix P; P has 12 parameters and is known through intrinsic and extrinsic parameter calibration.
The reprojection matrix Q enables conversion between the world coordinate system and the pixel coordinate system. The internal and external parameters can be calculated through image rectification with the binocular camera; a specific flow is shown in an exemplary manner in fig. 4. Thus, using the Bouguet stereo rectification algorithm for binocular image correction in conjunction with equations (7) and (8), one can further derive:
$$Q = \begin{bmatrix} 1 & 0 & 0 & -c_x \\ 0 & 1 & 0 & -c_y \\ 0 & 0 & 0 & f \\ 0 & 0 & -1/T_x & (c_x - c_x')/T_x \end{bmatrix} \qquad (9)$$

where c_x' is the x-coordinate of the principal point on the right image and all other parameters come from the left image: c_x, c_y are the coordinates of the principal point of the left image, and T_x is a coefficient. If the principal rays intersect at infinity, the bottom-right element of the matrix is 0. Given the homogeneous pixel coordinates u, v of a point and its disparity d, the point can be transformed by Q into the three-dimensional world coordinates X/W, Y/W, Z/W, where W is the calculated homogeneous coefficient.
$$Q \begin{bmatrix} u \\ v \\ d \\ 1 \end{bmatrix} = \begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} \qquad (10)$$
Finally, the obtained three-dimensional reconstruction coordinates are:
$$\left( \frac{X}{W},\; \frac{Y}{W},\; \frac{Z}{W} \right) \qquad (11)$$
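For illustration, a Python sketch of equations (9)-(11): assembling the reprojection matrix Q from rectified-camera parameters and converting a matched centroid's pixel coordinates and disparity into three-dimensional coordinates. The function and parameter names are assumptions of this sketch; in practice Q would typically be obtained directly from the stereo rectification step rather than assembled by hand.

```python
import numpy as np

def bouguet_q(f: float, cx: float, cy: float, cx_right: float, Tx: float) -> np.ndarray:
    """Reprojection matrix of equation (9); cx, cy are the left principal point,
    cx_right the right principal point x-coordinate, Tx the baseline coefficient."""
    return np.array([
        [1.0, 0.0, 0.0, -cx],
        [0.0, 1.0, 0.0, -cy],
        [0.0, 0.0, 0.0, f],
        [0.0, 0.0, -1.0 / Tx, (cx - cx_right) / Tx],
    ])

def reproject_to_3d(u: float, v: float, d: float, Q: np.ndarray) -> np.ndarray:
    """Apply Q to (u, v, d, 1) and dehomogenize, as in equations (10) and (11)."""
    X, Y, Z, W = Q @ np.array([u, v, d, 1.0])
    return np.array([X / W, Y / W, Z / W])
```

Applying this to each matched pair of left/right centroid points yields the three-dimensional light-spot coordinates from which the posture of the wearing appliance is derived.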
a method for wearable appliance pose determination according to an embodiment of the present disclosure has been described thus far. By the mode, the posture of the wearing appliance can be determined simply and efficiently, and the method is high in applicability and does not need to depend on hardware excessively; the method can eliminate the accumulated error of the posture measurement of the wearing appliance and greatly improve the measurement precision.
Fig. 6 is a schematic diagram of an apparatus 600 for wearable appliance pose determination, according to some embodiments of the present disclosure. The apparatus may implement the method shown in fig. 2.
As shown in fig. 6, the apparatus 600 may include a visual marker image acquisition module 601, the visual marker image acquisition module 601 configured to acquire a first image and a second image for a visual marker, wherein the visual marker is disposed on the wearing appliance. The apparatus 600 may comprise an image segmentation module 603, the image segmentation module 603 being configured to segment the first image and the second image to obtain a first foreground region and a second foreground region, respectively, corresponding to the visual marker. The apparatus 600 further comprises a connected region determination module 605 configured to determine a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region. The apparatus 600 may further comprise a centroid calculation module 607 configured to calculate a first set of centroid points corresponding to respective connected regions of the first set of connected regions and a second set of centroid points corresponding to respective connected regions of the second set of connected regions. The apparatus 600 further comprises a pose determination module 609, the pose determination module 609 configured to determine a pose of the wearing appliance based on the first set of centroid points and the second set of centroid points.
According to some embodiments of the present disclosure, the wearing appliance whose pose is determined by the apparatus may include a helmet, and the apparatus 600 may include an FPGA-based data processing unit binocular stereo measurement apparatus.
Further, in some embodiments, on the basis of the apparatus 600, a system for posture determination of a wearing appliance may be provided. In some embodiments, the system may be used with a vehicle. It should be noted that the system can also be used in other usage scenarios, such as navigation, aviation, etc., and the present disclosure is not limited thereto.
In an embodiment for use with a vehicle, the system may comprise: the data processing unit is used for processing corresponding data in the whole processing process; the wearing appliance inertia measurement unit is arranged on the wearing appliance, and the wearing appliance comprises a visual mark; the vehicle cabin inertia measuring unit is arranged on or in a vehicle cabin of the vehicle; and an apparatus 600 for pose determination of a wearing appliance, the apparatus 600 may be disposed on or in a vehicle cabin for visual calibration of the wearing appliance. Wherein the wearing appliance inertia measurement unit, the vehicle cabin inertia measurement unit and the vision calibration device are all electrically connected with the data processing unit. In this way, the data processing unit can process data between each unit and the device, and the whole process of attitude measurement is completed.
In some embodiments, the wearable gear inertial measurement unit in the system may include at least one of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer, and a three-axis magnetometer. In some embodiments, the cabin inertial measurement unit may comprise an on-board inertial navigation unit. It should be noted that the vehicle cabin inertia measurement unit and the wearing appliance inertia measurement unit are only exemplary, and any other device, apparatus or element capable of achieving the function may be used to achieve the corresponding function.
In some embodiments, as previously indicated, the means for wearing appliance pose determination may comprise a binocular camera. In particular, the device may be an integrated FPGA hardware platform device for binocular stereo vision pose determination of the wearing appliance.
It should be understood that each unit recited in the apparatus 600 corresponds to each step of the method 200 with reference to fig. 2. Moreover, the operations and features of the apparatus 600 and the units included therein all correspond to the operations and features described above in connection with fig. 2 and have the same effects, and detailed details are not repeated.
The elements included in apparatus 600 may be implemented in a variety of ways including software, hardware, firmware, or any combination thereof. In some embodiments, one or more of the units may be implemented using software and/or firmware, such as machine executable instructions stored on a storage medium. In addition to, or in the alternative to, machine-executable instructions, some or all of the elements in apparatus 600 may be implemented at least in part by one or more hardware logic components. By way of example, and not limitation, exemplary types of hardware logic components that may be used include Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so forth.
The elements shown in fig. 6 may be implemented partially or wholly as hardware modules, software modules, firmware modules, or any combination thereof. In particular, in certain embodiments, the processes, methods, or procedures described above may be implemented by hardware in a storage system or a host corresponding to the storage system or other computing device independent of the storage system.
Fig. 7 illustrates a schematic block diagram of an example device 700 that may be used to implement embodiments of the present disclosure. Device 700 may be used to implement computing device 140. As shown, device 700 includes a Central Processing Unit (CPU) 701 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM) 702 or computer program instructions loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The CPU 701, ROM 702, and RAM 703 are connected to each other via a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Various components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, a modem, a wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processing unit 701 performs the various methods and processes described above, such as the method 200. For example, in some embodiments, the method 200 may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of a computer program may be loaded onto and/or installed onto device 700 via ROM 702 and/or communications unit 709. When the computer program is loaded into the RAM 703 and executed by the CPU 701, one or more steps of the method 200 described above may be performed. Alternatively, in other embodiments, the CPU 701 may be configured to perform the method 200 in any other suitable manner (e.g., by way of firmware).
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/acts specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In summary, the present disclosure provides a method, an apparatus, and a system for determining a posture of a wearable device, which provide a hardware basis for eliminating accumulated errors in measurement of a posture of a helmet, and can perform corresponding development based on the hardware platform to achieve high-precision measurement of the posture of the helmet.
Further, while operations are depicted in a particular order, this should be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (10)

1. A method for wearable appliance pose determination, comprising:
acquiring a first image and a second image for a visual indicia, wherein the visual indicia is disposed on the wearable appliance;
segmenting the first image and the second image to obtain a first foreground region and a second foreground region respectively corresponding to the visual marker;
determining a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region;
calculating a first set of centroid points corresponding to each connected region in the first set of connected regions and a second set of centroid points corresponding to each connected region in the second set of connected regions; and
determining a pose of the wearable appliance based on the first set of centroid points and the second set of centroid points.
2. The method of claim 1, wherein determining the pose of the wearable appliance based on the first set of centroid points and the second set of centroid points further comprises:
matching centroid points in the first centroid point set with corresponding centroid points in the second centroid point set to obtain matched centroid point pairs; and
determining three-dimensional space coordinates of the centroid points based on the matched centroid point pairs to obtain a pose of the wearable appliance.
3. The method of claim 2, wherein the visual indicia comprises a visible light LED lamp, and wherein acquiring the first and second images for the visual indicia comprises:
performing image statistics in response to the first visible light detector and the second visible light detector detecting the visual marker;
performing exposure control on the first visible light detector and the second visible light detector based on the statistics; and
based on the exposure control, a first image and a second image for a visual marker are acquired.
4. The method of claim 3, wherein determining three-dimensional spatial coordinates of a centroid point based on the matched centroid point pair further comprises:
respectively calculating an internal and external parameter matrix of the first visible light detector and an internal and external parameter matrix of the second visible light detector based on the first visible light detector and the second visible light detector; and
determining three-dimensional space coordinates of centroid points based on the internal and external reference matrices of the first visible light detector and the internal and external reference matrices of the second visible light detector and the matched centroid point pairs.
5. The method of claim 3 or 4, wherein the wearing device comprises a helmet and the visible light LED lamp has a visible light wavelength of 940 nm.
6. An apparatus for posture determination of a wearing appliance, comprising:
a visual marker image acquisition module configured to acquire a first image and a second image for a visual marker, wherein the visual marker is disposed on the wearable appliance;
an image segmentation module configured to segment the first and second images to obtain first and second foreground regions corresponding to the visual indicia, respectively;
a connected regions determination module configured to determine a first set of connected regions for the first foreground region and a second set of connected regions for the second foreground region;
a centroid calculation module configured to calculate a first set of centroid points corresponding to respective connected regions of the first set of connected regions and a second set of centroid points corresponding to respective connected regions of the second set of connected regions; and
a pose determination module configured to determine a pose of the wearable appliance based on the first set of centroid points and the second set of centroid points.
7. The apparatus of claim 6, wherein the wearing instrument comprises a helmet and the apparatus comprises an FPGA-based data processing unit binocular stereo measurement apparatus.
8. A system for posture determination of a wearing appliance for use with a vehicle, the system comprising:
a data processing unit;
a wearing instrument inertial measurement unit disposed on the wearing instrument, the wearing instrument including a visual marker;
the vehicle cabin inertia measuring unit is arranged on or in a vehicle cabin of the vehicle; and
the apparatus for posture determination of a wearing appliance of any one of claims 6 to 7, provided on or in a vehicle cabin for visual calibration of the wearing appliance;
the wearing appliance inertia measurement unit, the vehicle cabin inertia measurement unit and the visual calibration device are all electrically connected with the data processing unit.
9. The system of claim 8, wherein the wearable implement inertial measurement unit comprises at least one of a three-axis MEMS gyroscope, a three-axis MEMS accelerometer, and a three-axis magnetometer, and the vehicle cabin inertial measurement unit comprises an on-board inertial navigation unit.
10. The system of claim 8 or 9, wherein the means for dressing implement pose determination comprises a binocular camera.
CN202210413749.3A 2022-04-20 2022-04-20 Method, device and system for posture determination of wearing appliance Active CN114519744B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210413749.3A CN114519744B (en) 2022-04-20 2022-04-20 Method, device and system for posture determination of wearing appliance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210413749.3A CN114519744B (en) 2022-04-20 2022-04-20 Method, device and system for posture determination of wearing appliance

Publications (2)

Publication Number Publication Date
CN114519744A CN114519744A (en) 2022-05-20
CN114519744B true CN114519744B (en) 2022-06-21

Family

ID=81600287

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210413749.3A Active CN114519744B (en) 2022-04-20 2022-04-20 Method, device and system for posture determination of wearing appliance

Country Status (1)

Country Link
CN (1) CN114519744B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9524647B2 (en) * 2015-01-19 2016-12-20 The Aerospace Corporation Autonomous Nap-Of-the-Earth (ANOE) flight path planning for manned and unmanned rotorcraft

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106233328A (en) * 2014-02-19 2016-12-14 埃弗加泽公司 For improving, improve or strengthen equipment and the method for vision
CN108917746A (en) * 2018-07-26 2018-11-30 中国人民解放军国防科技大学 helmet posture measuring method, measuring device and measuring system
CN109949361A (en) * 2018-12-16 2019-06-28 内蒙古工业大学 A kind of rotor wing unmanned aerial vehicle Attitude estimation method based on monocular vision positioning
US10456915B1 (en) * 2019-01-25 2019-10-29 Mujin, Inc. Robotic system with enhanced scanning mechanism
CN110674751A (en) * 2019-09-25 2020-01-10 东北大学 Device and method for detecting head posture based on monocular camera
WO2022040970A1 (en) * 2020-08-26 2022-03-03 南京翱翔信息物理融合创新研究院有限公司 Method, system, and device for synchronously performing three-dimensional reconstruction and ar virtual-real registration
CN113554704A (en) * 2020-10-30 2021-10-26 江苏大学 Electronic component positioning method based on improved SURF algorithm

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
An improvement of pose measurement method using global control points calibration; Changku Sun et al.; PLOS ONE; 2015-07-24; 1-16 *
A virtual-real collision detection method with real-time hand tracking and positioning; Li Yan et al.; Journal of Computer-Aided Design & Computer Graphics; 2011-04-15 (No. 04); 163-168 *
Moving target tracking and measurement based on binocular vision; Zhang Juan et al.; Computer Engineering and Applications; 2009-09-01 (No. 25); 195-198 *
Helmet motion attitude measurement technology based on photogrammetry; Zhang Hulong et al.; Opto-Electronic Engineering; 2011-10-15; Vol. 28 (No. 10); 1-5 *
Research on pose calibration methods for multi-sensor integrated vehicle-mounted mobile measurement equipment; Li Shaofu; China Masters' Theses Full-text Database (Basic Sciences); 2022-01-15 (No. 1); A008-299 *
Fast region-centroid image matching algorithm; Hu Min et al.; Journal of Electronic Measurement and Instrumentation; 2011-05-15 (No. 05); 73-80 *
Night-flight observation and navigation system for helicopter pilots; Ji Ming et al.; Journal of Applied Optics; 2009-11-15 (No. 06); 9-14 *
Recognition and localization of cooperative targets in pose measurement of a space manipulator; He Long et al.; Modern Electronics Technique; 2018-07-03 (No. 13); 114-118+122 *

Also Published As

Publication number Publication date
CN114519744A (en) 2022-05-20

Similar Documents

Publication Publication Date Title
CN110322500B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
CN111354042B (en) Feature extraction method and device of robot visual image, robot and medium
KR102298378B1 (en) Information processing device, information processing method, and program
EP3226208A1 (en) Information processing device and computer program
CN106908064B (en) Indoor night vision navigation method based on Kinect2 sensor
CN110349212B (en) Optimization method and device for instant positioning and map construction, medium and electronic equipment
US20180108149A1 (en) Computer program, object tracking method, and object tracking device
JP2006285788A (en) Mixed reality information generation device and method
CN110530356B (en) Pose information processing method, device, equipment and storage medium
CN111210477A (en) Method and system for positioning moving target
JP2016109669A (en) Information processing device, information processing method, program
US20170039718A1 (en) Information processing apparatus, information processing method, and storage medium
CN109155055B (en) Region-of-interest image generating device
CN110895676B (en) dynamic object tracking
US20200081249A1 (en) Internal edge verification
JP2018084954A (en) Program, pose derivation method, and pose derivation device
US20230300464A1 (en) Direct scale level selection for multilevel feature tracking under motion blur
CN114360043B (en) Model parameter calibration method, sight tracking method, device, medium and equipment
WO2015119657A1 (en) Depth image generation utilizing depth information reconstructed from an amplitude image
CN114299162A (en) Rapid calibration method for AR-HUD
CN114519744B (en) Method, device and system for posture determination of wearing appliance
CN109740526A (en) Signal lamp recognition methods, device, equipment and medium
CN108259886A (en) Deduction system, presumption method and program for estimating
JP2004126905A (en) Image processor
KR20150014342A (en) Apparatus and method for analyzing an image including event information

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant