CN109543576A - Train driver detection method based on bone detection and three-dimensional reconstruction


Info

Publication number
CN109543576A
Authority
CN
China
Prior art keywords
coordinate system
driver
bone
coordinate
video sequence
Prior art date
Legal status
Pending
Application number
CN201811330010.6A
Other languages
Chinese (zh)
Inventor
王正友
王长明
张泽文
郭旭峰
黄正能
马丽琴
Current Assignee
Shijiazhuang Bumu Electronics Co Ltd
Shijiazhuang Tiedao University
Original Assignee
Shijiazhuang Bumu Electronics Co Ltd
Shijiazhuang Tiedao University
Priority date
Filing date
Publication date
Application filed by Shijiazhuang Bumu Electronics Co Ltd and Shijiazhuang Tiedao University
Priority: CN201811330010.6A
Publication: CN109543576A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V 20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06V 20/40 - Scenes; scene-specific elements in video content
    • G06T 17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T 2200/08 - Indexing scheme involving all processing steps from image acquisition to 3D model generation
    • G06T 2219/2016 - Indexing scheme for editing of 3D models: rotation, translation, scaling

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Multimedia (AREA)
  • Software Systems (AREA)
  • Architecture (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a train driver detection method based on bone detection and three-dimensional reconstruction, relating to the technical field of image processing. The method comprises the following steps: acquiring the original video sequence of the train driver; removing illumination interference from the acquired original video sequence to obtain a clear video sequence; obtaining the driver's two-dimensional skeleton information from the preprocessed video sequence, then obtaining the intrinsic and extrinsic parameters of the monitoring camera by a camera calibration method, computing the three-dimensional coordinates of each of the driver's joints from the camera's intrinsic and extrinsic parameters and a human skeleton model, and outputting the driver's three-dimensional skeleton video sequence. The method has the advantages of high detection accuracy and simplicity.

Description

Train driver detection method based on bone detection and three-dimensional reconstruction
Technical field
The present invention relates to the technical field of image processing, and more particularly to a train driver detection method based on bone detection and three-dimensional reconstruction.
Background Art
In train operation and shift-handover procedures, the train driver is often required to perform prescribed actions while executing certain operations; these actions are normally defined as particular sequences of arm movements and gestures. To ensure that drivers operate according to the regulations, these specific actions of the train driver must be recorded. At present, behavior recognition is usually carried out by manually reviewing video recordings, which consumes considerable time and labor.
Summary of the invention
The technical problem to be solved by the present invention is to provide a train driver detection method based on bone detection and three-dimensional reconstruction that offers high detection accuracy and a simple procedure.
To solve the above technical problem, the technical solution adopted by the present invention is a train driver detection method based on bone detection and three-dimensional reconstruction, characterized in that it comprises the following steps:
acquiring the original video sequence of the train driver;
removing illumination interference from the acquired original video sequence to obtain a clear video sequence;
obtaining the driver's two-dimensional skeleton information from the preprocessed video sequence, then obtaining the intrinsic and extrinsic parameters of the monitoring camera by a camera calibration method, computing the three-dimensional coordinates of each of the driver's joints from the camera's intrinsic and extrinsic parameters and a human skeleton model, and outputting the driver's three-dimensional skeleton video sequence.
In a further technical solution, the original video sequence of the train driver is acquired from a monitoring device installed in the train cab or from the driver's on-board monitoring device.
In a further technical solution, a color space transfer method is used to remove illumination interference from the acquired original video sequence of the train driver, yielding a clear video sequence.
In a further technical solution, the color space transfer method comprises the following steps:
converting from RGB color space to LMS color space;
then converting from LMS color space to lαβ color space;
in lαβ color space the color channels are largely decorrelated, so each of the three channel images can be operated on independently without modifying the information in the other two channels.
In a further technical solution, a frame with normal illumination is selected as the reference for color space extraction; for frames with very low illumination captured while the train is running in a tunnel, the color space extracted from an earlier well-lit frame is substituted; similarly, over-exposed (highlight) regions in particular frames are replaced with the corresponding color space of a properly exposed frame.
Preferably, the driver's two-dimensional skeleton information is obtained using the OpenPose detection method.
In a further technical solution, the camera calibration method is as follows:
A point A = (X_a, Y_a, Z_a) in the real world is mapped by a 3 × 4 projection matrix P to the image coordinates (u_a, v_a):
(u, v, 1)^T ~ P (X, Y, Z, 1)^T    (3)
The matrix P can be decomposed into three matrices: the intrinsic matrix, containing the intrinsic parameters, namely the focal length f_x in the X direction, the focal length f_y in the Y direction, the focal length f_z in the Z direction, the principal point coordinates (c_x, c_y) and the skew S; the rotation matrix, containing three extrinsic parameters, namely the rotation angle roll about the Z axis, the rotation angle pitch about the X axis and the rotation angle yaw about the Y axis; and the translation matrix, containing the other three extrinsic parameters, namely the displacement t_x along the X axis, the displacement t_y along the Y axis and the displacement t_z along the Z axis. The relationship between the parameters is as follows:
P = K [R | t]    (4)
where K = [f_x S c_x; 0 f_y c_y; 0 0 1] is the intrinsic matrix, R = R_z · R_x · R_y is the rotation matrix and t = (t_x, t_y, t_z)^T is the translation vector.
The initial values of the parameters roll, pitch, yaw, f_x and f_y are computed from vanishing points, and S defaults to 0. The origin is placed on the ground plane directly below the camera, so t_x and t_z are set to 0 and t_y is taken as the approximate known camera height.
In a further technical solution, when calibrating the camera, two pairs of mutually orthogonal parallel lines are selected within the cab, or a calibration board as described below is hung in the cab.
In a further technical solution, the joints of the upper-body human skeleton model have 18 degrees of freedom in total, including 3 degrees of freedom for global translation, 3 for global rotation, 3 for the head, 3 for each of the left and right shoulder joints, 1 for each of the left and right elbow joints and 1 for the abdomen.
In a further technical solution, the skeleton model coordinates are computed as follows:
The computation of each joint's coordinates in the skeleton model uses conversions between the following coordinate systems: the image coordinate system I, the joint local coordinate systems L, the body's global coordinate system H, the camera coordinate system C and the world coordinate system W. The relationship between the camera coordinate system C and the world coordinate system W is determined by camera calibration. The body's global coordinate system H is a coordinate system attached to the skeleton model. Besides the motion of the body as a whole, each limb rotates within a certain range about its corresponding joint, so a local coordinate system L is established at each body part with one of its joints as the origin. Limb rotations are represented by Euler angles. In the Euler-angle skeleton model, the coordinate transformation between a child node c and its parent node f can be written as:
M_cf = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α)    (8)
where T(t_x, t_y, t_z) is the translation matrix of the child node relative to the parent node, and R_x(α), R_y(β), R_z(γ) are the rotation matrices of the child node's coordinate system about the X, Y and Z axes of the parent node's coordinate system, respectively:
R_x(α) = [1 0 0 0; 0 cosα -sinα 0; 0 sinα cosα 0; 0 0 0 1]
R_y(β) = [cosβ 0 sinβ 0; 0 1 0 0; -sinβ 0 cosβ 0; 0 0 0 1]
R_z(γ) = [cosγ -sinγ 0 0; sinγ cosγ 0 0; 0 0 1 0; 0 0 0 1]
Assuming that the coordinates of a joint in its child node's local coordinate system are p = (x, y, z, 1)^T, then according to the coordinate transformation formula (8) its coordinates P = (X, Y, Z, 1)^T in the parent node's coordinate system can be expressed as:
P = M_cf p = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α) p
Once the transformation between child and parent coordinate systems has been obtained at every level, the position of any skeleton point of the body in the global coordinate system is computed by cascading the transformations layer by layer, and its position in the world coordinate system is then derived, yielding the coordinates of each of the driver's joints at each moment of the driver video.
The beneficial effects of adopting the above technical solution are as follows. The method first denoises the acquired original video sequence, minimizing noise interference and improving the accuracy of the subsequent steps. After preprocessing, the method uses the OpenPose bone detection technique to analyze the driver's real-time behavior from the driver's complete skeleton information, and then uses three-dimensional reconstruction to recover the three-dimensional coordinates of the driver's skeleton points, thereby obtaining the spatial information of each of the driver's joints. The method is simple and easy to implement.
Brief description of the drawings
The present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments.
Fig. 1 is the flow chart of the method for the embodiment of the present invention;
Fig. 2 is the l-α relation plot in lαβ space in the embodiment of the present invention;
Fig. 3 is the l-β relation plot in lαβ space in the embodiment of the present invention;
Fig. 4 is the α-β relation plot in lαβ space in the embodiment of the present invention;
Fig. 5 is the perspective projection model diagram in the embodiment of the present invention;
Fig. 6 is the schematic diagram of the calibration board in the embodiment of the present invention;
Fig. 7 is the upper-body human skeleton model diagram in the embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The present invention can, however, be implemented in ways other than those described here, and those skilled in the art can make similar generalizations without departing from the spirit of the present invention; therefore the present invention is not limited by the specific embodiments disclosed below.
As shown in Fig. 1, an embodiment of the invention discloses a train driver detection method based on bone detection and three-dimensional reconstruction, the method comprising the following steps:
First, the original video sequence of the driver is acquired from a monitoring device installed in the train cab or from the driver's on-board monitoring device.
Then illumination interference is removed from the acquired original video sequence using a color space transfer method, yielding a clear video sequence.
For the preprocessed video sequence, the driver's two-dimensional skeleton information is obtained using the OpenPose detection method; the intrinsic and extrinsic parameters of the monitoring camera are then obtained by camera calibration, the three-dimensional coordinates of each of the driver's joints are computed from the camera parameters and the human skeleton model, and the driver's three-dimensional skeleton video sequence is output.
Each part of the above method is described in detail below for the present embodiment; a minimal end-to-end sketch is given first.
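The following minimal Python sketch is an illustration only, not the claimed implementation: it shows how the three stages fit together, and the three stage functions passed in are hypothetical callables standing in for the steps detailed in the rest of this section.

```python
import cv2  # OpenCV, assumed available for video decoding


def process_driver_video(video_path, reference_rgb,
                         transfer_color, detect_2d_skeleton, lift_to_3d):
    """Run the three stages of the method over a driver video.

    The three callables are placeholders for the stages detailed below:
    illumination correction, 2D skeleton detection and 3D reconstruction.
    Returns one 3D skeleton per decoded frame.
    """
    capture = cv2.VideoCapture(video_path)
    skeletons_3d = []
    while True:
        ok, frame_bgr = capture.read()
        if not ok:
            break
        # Stage 1: color transfer from a well-lit reference frame
        # (l-alpha-beta sketch below); frames are float RGB in [0, 1].
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB) / 255.0
        clean = transfer_color(rgb, reference_rgb)
        # Stage 2: 2D joint detection, e.g. with OpenPose
        # (parsing sketch below).
        joints_2d = detect_2d_skeleton(clean)
        # Stage 3: lift the 2D joints to 3D using the calibrated camera
        # model and the upper-body skeleton model (sketches below).
        skeletons_3d.append(lift_to_3d(joints_2d))
    capture.release()
    return skeletons_3d
```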
Overcoming local strong-light interference in the video and improving brightness in tunnels:
The video used in the method suffers from illumination interference (locally strong illumination in the image, abrupt changes of illumination over the entire image), so applying the skeleton point extraction method directly does not give good detection results. The color of the entire image therefore has to be balanced. Because the channels of the common RGB color space are highly correlated, color cannot be transferred effectively in that space. By gathering color statistics over a large number of natural images, Ruderman et al. obtained the statistical color distribution of images and, through a change of color space, constructed lαβ, a statistically uniform color space with nearly orthogonal axes, together with simple 3 × 3 matrix operations that convert from RGB to lαβ (as shown in Figs. 2-4). The conversion first transforms RGB color space into LMS color space and then transforms LMS color space into lαβ color space.
In lαβ color space the color channels are largely decorrelated, so each of the three channel images can be operated on independently without modifying the information in the other two channels.
Using the color transfer described above, a frame with normal illumination is chosen as the reference for color space extraction; for frames with very low illumination captured while the train is running in a tunnel, the color space extracted from an earlier well-lit frame is substituted. Similarly, over-exposed regions in particular frames are replaced with the corresponding color space of a properly exposed frame, thereby removing the illumination interference. A sketch of this color transfer step is given below.
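The following is a minimal numerical sketch of the color transfer step. It assumes the RGB/LMS/lαβ conversion matrices of Reinhard et al.'s color transfer work (which builds on Ruderman's lαβ space) and frames supplied as float RGB arrays in [0, 1]; it illustrates the technique rather than reproducing the exact implementation of the embodiment.

```python
import numpy as np

# RGB <-> LMS matrices as given by Reinhard et al., "Color Transfer
# between Images" (assumed here as the concrete form of the conversion).
RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])
LMS2RGB = np.array([[ 4.4679, -3.5873,  0.1193],
                    [-1.2186,  2.3809, -0.1624],
                    [ 0.0497, -0.2439,  1.2045]])
# log-LMS -> l-alpha-beta decorrelating transform and its inverse.
A = np.diag([1 / np.sqrt(3), 1 / np.sqrt(6), 1 / np.sqrt(2)]) @ \
    np.array([[1, 1, 1], [1, 1, -2], [1, -1, 0]])
A_INV = np.array([[1, 1, 1], [1, 1, -1], [1, -2, 0]]) @ \
    np.diag([np.sqrt(3) / 3, np.sqrt(6) / 6, np.sqrt(2) / 2])


def rgb_to_lab(rgb):
    """rgb: HxWx3 float array in [0, 1]; returns l-alpha-beta channels."""
    lms = np.clip(rgb @ RGB2LMS.T, 1e-6, None)   # avoid log(0)
    return np.log10(lms) @ A.T


def lab_to_rgb(lab):
    lms = 10.0 ** (lab @ A_INV.T)
    return np.clip(lms @ LMS2RGB.T, 0.0, 1.0)


def transfer_color(dark_rgb, reference_rgb):
    """Impose the reference frame's per-channel statistics on a dark frame."""
    src, ref = rgb_to_lab(dark_rgb), rgb_to_lab(reference_rgb)
    out = np.empty_like(src)
    for c in range(3):  # channels are nearly independent in l-alpha-beta
        s_mean, s_std = src[..., c].mean(), src[..., c].std() + 1e-6
        r_mean, r_std = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mean) * (r_std / s_std) + r_mean
    return lab_to_rgb(out)
```

Applied per frame, the same routine can also serve the highlight-region replacement: the reference statistics may be computed over the corresponding region of a properly exposed frame rather than over the whole image.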
Three-dimensional reconstruction of the driver's posture:
(1) Obtaining the driver's skeleton information using OpenPose
The open-source OpenPose project of Carnegie Mellon University used 500 cameras mounted on a dome structure to photograph subjects' body postures from all angles and collected a large experimental data set. Rather than relying on tracking, it uses a CNN (convolutional neural network) together with PAFs (part affinity fields) and merges the final detection results into a person's complete skeleton. This approach helps humans and robots understand the surrounding environment accurately and opens new avenues for human-machine interaction. The present method uses this system to obtain the driver's skeleton information; a sketch of parsing its output follows.
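One way to consume the detector's output (a tooling assumption, not part of the claims) is to run OpenPose with its --write_json option, which writes one JSON file per frame containing flat [x, y, confidence] triplets per person under the key "pose_keypoints_2d". A minimal parsing sketch:

```python
import json
import numpy as np


def load_openpose_frame(json_path, min_confidence=0.1):
    """Read one OpenPose --write_json output file and return the 2D joints
    of the first detected person as an (N, 3) array of (x, y, confidence).
    Joints below the confidence threshold are marked as NaN."""
    with open(json_path, "r") as f:
        data = json.load(f)
    people = data.get("people", [])
    if not people:
        return None  # no driver detected in this frame
    keypoints = np.array(people[0]["pose_keypoints_2d"], dtype=float)
    joints = keypoints.reshape(-1, 3)          # one row per joint
    joints[joints[:, 2] < min_confidence, :2] = np.nan
    return joints
```

In the monitoring setting assumed here a single driver is expected in the frame, so only the first detected person is kept; the per-joint confidence allows low-quality detections (for example occluded joints) to be discarded before 3D reconstruction.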
(2) Camera imaging model and calibration
The imaging process of a camera is the projection of a 3D scene onto a 2D image plane, which is described by an image transformation; the transformation commonly used in computer vision is the perspective projection model, as shown in Fig. 5. The projection model gives the geometric relationship between the image and the actual scene. Determining the camera's internal geometric and optical characteristics (intrinsic parameters) and the three-dimensional position and orientation of the camera coordinate system relative to the world coordinate system (extrinsic parameters) is called camera calibration.
As shown in Fig. 5, a point A = (X_a, Y_a, Z_a) in the real world is mapped by a 3 × 4 projection matrix to the image coordinates (u_a, v_a):
(u, v, 1)^T ~ P (X, Y, Z, 1)^T    (16)
The matrix P can be decomposed into three matrices: the intrinsic matrix, containing the intrinsic parameters (the focal length f_x in the X direction, the focal length f_y in the Y direction, the focal length f_z in the Z direction, the principal point coordinates (c_x, c_y) and the skew S); the rotation matrix, containing three extrinsic parameters (the rotation angle roll about the Z axis, the rotation angle pitch about the X axis and the rotation angle yaw about the Y axis); and the translation matrix, containing the other three extrinsic parameters (the displacement t_x along the X axis, the displacement t_y along the Y axis and the displacement t_z along the Z axis). The relationship between the parameters is as follows:
P = K [R | t]    (17)
where K = [f_x S c_x; 0 f_y c_y; 0 0 1] is the intrinsic matrix, R = R_z · R_x · R_y is the rotation matrix and t = (t_x, t_y, t_z)^T is the translation vector.
The initial values of some of the parameters (roll, pitch, yaw, f_x, f_y) can be computed from vanishing points, and S can default to 0. The origin is usually placed on the ground plane directly below the camera, so t_x and t_z can be set to 0 and t_y is taken as the approximate known camera height. These initial values rest on the assumptions that the camera's principal point is at the image center, the pixel aspect ratio is one and the skew is zero; in general the first two cannot be guaranteed. To relax these constraints, an EDA algorithm is used to reduce the reprojection error of the projected points, so that each camera parameter converges to a local optimum within its corresponding range of initial values.
This camera calibration method requires only two vanishing points lying on the horizontal plane and an approximate camera height as input. To calibrate the camera in this method, only two pairs of orthogonal parallel lines are needed; they can be selected within the cab, or a calibration board as shown in Fig. 6 can be hung in the cab. A projection sketch following equations (16) and (17) is given below.
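The following sketch only illustrates equations (16) and (17): it assembles K from the focal lengths, principal point and skew, builds R = R_z·R_x·R_y from roll, pitch and yaw, and projects a world point. The numerical values are placeholders, and the vanishing-point initialization and EDA refinement described above are not reproduced here.

```python
import numpy as np


def rot_x(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])


def rot_y(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])


def rot_z(g):
    return np.array([[np.cos(g), -np.sin(g), 0],
                     [np.sin(g),  np.cos(g), 0],
                     [0, 0, 1]])


def projection_matrix(fx, fy, cx, cy, skew, roll, pitch, yaw, t):
    """P = K [R | t] with R = Rz(roll) Rx(pitch) Ry(yaw), as in eq. (17)."""
    K = np.array([[fx, skew, cx],
                  [0.0, fy,  cy],
                  [0.0, 0.0, 1.0]])
    R = rot_z(roll) @ rot_x(pitch) @ rot_y(yaw)
    Rt = np.hstack([R, np.asarray(t, float).reshape(3, 1)])  # 3x4 [R | t]
    return K @ Rt


def project(P, world_point):
    """Project a 3D world point to pixel coordinates (eq. (16))."""
    X = np.append(np.asarray(world_point, float), 1.0)       # homogeneous
    u, v, w = P @ X
    return u / w, v / w


# Example with placeholder values; the sign convention for t depends on the
# chosen axes, with t_y standing in for the approximate camera height.
P = projection_matrix(fx=900.0, fy=900.0, cx=640.0, cy=360.0, skew=0.0,
                      roll=0.0, pitch=0.35, yaw=0.0, t=[0.0, -2.1, 0.0])
print(project(P, [0.4, 1.2, 1.5]))
```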
Human skeleton model:
Establishing a skeleton model is the basis of human motion analysis. A good skeleton model not only characterizes the size, shape and joint angles of each body part, but also reflects the topological relationships between limbs. In this method an upper-body skeleton model, shown in Fig. 7, is established for the driver; its joints have 18 degrees of freedom in total, comprising global translation (3), global rotation (3), head (3), shoulder joints (3 each for left and right), elbow joints (1 each for left and right) and abdomen (1).
Skeleton model coordinate computation
The computation of each joint's coordinates in the skeleton model uses conversions between the following coordinate systems: the image coordinate system I, the joint local coordinate systems L, the body's global coordinate system H, the camera coordinate system C and the world coordinate system W. The relationship between the camera coordinate system and the world coordinate system is determined by camera calibration. The body's global coordinate system is a coordinate system attached to the skeleton model. Besides the motion of the body as a whole, each limb rotates within a certain range about its corresponding joint, so a local coordinate system is established at each body part with one of its joints as the origin. The method represents limb rotations with Euler angles, so the coordinates of a point in the body's global coordinate system can be found through a chain of conversions between child node and parent node local coordinate systems; the point's position in the world coordinate system is then obtained through the global rotation and translation of the body.
In the Euler-angle skeleton model, the coordinate transformation between a child node c and its parent node f can be written as:
M_cf = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α)    (21)
where T(t_x, t_y, t_z) is the translation matrix of the child node relative to the parent node, and R_x(α), R_y(β), R_z(γ) are the rotation matrices of the child node's coordinate system about the X, Y and Z axes of the parent node's coordinate system, respectively:
R_x(α) = [1 0 0 0; 0 cosα -sinα 0; 0 sinα cosα 0; 0 0 0 1]
R_y(β) = [cosβ 0 sinβ 0; 0 1 0 0; -sinβ 0 cosβ 0; 0 0 0 1]
R_z(γ) = [cosγ -sinγ 0 0; sinγ cosγ 0 0; 0 0 1 0; 0 0 0 1]
Assuming that the coordinates of a joint in its child node's local coordinate system are p = (x, y, z, 1)^T, then according to the coordinate transformation formula above its coordinates P = (X, Y, Z, 1)^T in the parent node's coordinate system can be expressed as:
P = M_cf p = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α) p
Once the transformation between child and parent coordinate systems has been obtained at every level, the position of any skeleton point of the body in the global coordinate system can be computed by cascading the transformations layer by layer, and its position in the world coordinate system is then derived.
With the above method, the coordinates of each of the driver's joints at each moment of the driver video are obtained, laying the foundation for subsequent behavior recognition and semantic analysis of the driver; a forward-kinematics sketch of the cascaded transformation follows.
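A minimal forward-kinematics sketch of the cascaded transformation in equations (8)/(21) is given below; the joint names, parent links and offsets are illustrative placeholders rather than the exact 18-degree-of-freedom model of Fig. 7.

```python
import numpy as np


def translation(tx, ty, tz):
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M


def rotation(axis, angle):
    """4x4 homogeneous rotation about one axis of the parent frame."""
    c, s = np.cos(angle), np.sin(angle)
    M = np.eye(4)
    if axis == "x":
        M[1:3, 1:3] = [[c, -s], [s, c]]
    elif axis == "y":
        M[0, 0], M[0, 2], M[2, 0], M[2, 2] = c, s, -s, c
    else:  # "z"
        M[0:2, 0:2] = [[c, -s], [s, c]]
    return M


def child_to_parent(offset, euler):
    """M_cf = T(tx,ty,tz) Rz(gamma) Ry(beta) Rx(alpha), as in eq. (8)."""
    alpha, beta, gamma = euler
    return (translation(*offset) @ rotation("z", gamma)
            @ rotation("y", beta) @ rotation("x", alpha))


def global_joint_positions(skeleton, root_pose):
    """Cascade child-to-parent transforms down the hierarchy and return the
    position of every joint in the body's global coordinate system.
    `skeleton` maps joint name -> (parent name or None, offset, euler) and
    must list every parent before its children."""
    transforms, positions = {}, {}
    for name, (parent, offset, euler) in skeleton.items():
        local = child_to_parent(offset, euler)
        transforms[name] = (root_pose if parent is None
                            else transforms[parent]) @ local
        positions[name] = transforms[name] @ np.array([0.0, 0.0, 0.0, 1.0])
    return positions


# Illustrative 3-joint chain (abdomen -> right shoulder -> right elbow).
skeleton = {
    "abdomen":        (None,            (0.0, 0.0, 0.0), (0.0, 0.0, 0.1)),
    "right_shoulder": ("abdomen",        (0.2, 0.4, 0.0), (0.3, 0.0, 0.0)),
    "right_elbow":    ("right_shoulder", (0.0, -0.3, 0.0), (0.8, 0.0, 0.0)),
}
root_pose = translation(0.5, 1.1, 2.0)   # global translation of the body
for joint, p in global_joint_positions(skeleton, root_pose).items():
    print(joint, p[:3])
```

In this sketch the root pose plays the role of the body's global translation and rotation; composing it with the rigid transform obtained from camera calibration would express the same joints in world or camera coordinates.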

Claims (10)

1. A train driver detection method based on bone detection and three-dimensional reconstruction, characterized in that it comprises the following steps:
acquiring the original video sequence of the train driver;
removing illumination interference from the acquired original video sequence to obtain a clear video sequence;
obtaining the driver's two-dimensional skeleton information from the preprocessed video sequence, then obtaining the intrinsic and extrinsic parameters of the monitoring camera by a camera calibration method, computing the three-dimensional coordinates of each of the driver's joints from the camera's intrinsic and extrinsic parameters and a human skeleton model, and outputting the driver's three-dimensional skeleton video sequence.
2. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that the original video sequence of the train driver is acquired from a monitoring device installed in the train cab or from the driver's on-board monitoring device.
3. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that a color space transfer method is used to remove illumination interference from the acquired original video sequence of the train driver, yielding a clear video sequence.
4. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 3, characterized in that the color space transfer method comprises the following steps:
converting from RGB color space to LMS color space;
then converting from LMS color space to lαβ color space;
in lαβ color space the color channels are largely decorrelated, so each of the three channel images can be operated on independently without modifying the information in the other two channels.
5. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 4, characterized in that a frame with normal illumination is chosen as the reference for color space extraction; for frames with very low illumination captured while the train is running in a tunnel, the color space extracted from an earlier well-lit frame is substituted; similarly, over-exposed regions in particular frames are replaced with the corresponding color space of a properly exposed frame.
6. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that the driver's two-dimensional skeleton information is obtained using the OpenPose detection method.
7. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that the camera calibration method is as follows:
a point A = (X_a, Y_a, Z_a) in the real world is mapped by a 3 × 4 projection matrix P to the image coordinates (u_a, v_a):
(u, v, 1)^T ~ P (X, Y, Z, 1)^T    (3)
wherein the matrix P can be decomposed into three matrices: the intrinsic matrix, containing the intrinsic parameters, namely the focal length f_x in the X direction, the focal length f_y in the Y direction, the focal length f_z in the Z direction, the principal point coordinates (c_x, c_y) and the skew S; the rotation matrix, containing three extrinsic parameters, namely the rotation angle roll about the Z axis, the rotation angle pitch about the X axis and the rotation angle yaw about the Y axis; and the translation matrix, containing the other three extrinsic parameters, namely the displacement t_x along the X axis, the displacement t_y along the Y axis and the displacement t_z along the Z axis; the relationship between the parameters is as follows:
P = K [R | t]    (4)
where K = [f_x S c_x; 0 f_y c_y; 0 0 1] is the intrinsic matrix, R = R_z · R_x · R_y is the rotation matrix and t = (t_x, t_y, t_z)^T is the translation vector;
the initial values of the parameters roll, pitch, yaw, f_x and f_y are computed from vanishing points, and S defaults to 0; the origin is placed on the ground plane directly below the camera, so t_x and t_z are set to 0 and t_y is taken as the approximate known camera height.
8. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 7, characterized in that, when calibrating the camera, two pairs of mutually orthogonal parallel lines are selected within the cab, or a calibration board as described above is hung in the cab.
9. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that the joints of the upper-body human skeleton model have 18 degrees of freedom in total, including 3 degrees of freedom for global translation, 3 for global rotation, 3 for the head, 3 for each of the left and right shoulder joints, 1 for each of the left and right elbow joints and 1 for the abdomen.
10. The train driver detection method based on bone detection and three-dimensional reconstruction according to claim 1, characterized in that the skeleton model coordinates are computed as follows:
the computation of each joint's coordinates in the skeleton model uses conversions between the following coordinate systems: the image coordinate system I, the joint local coordinate systems L, the body's global coordinate system H, the camera coordinate system C and the world coordinate system W; the relationship between the camera coordinate system C and the world coordinate system W is determined by camera calibration; the body's global coordinate system H is a coordinate system attached to the skeleton model; besides the motion of the body as a whole, each limb rotates within a certain range about its corresponding joint, so a local coordinate system L is established at each body part with one of its joints as the origin; limb rotations are represented by Euler angles, and in the Euler-angle skeleton model the coordinate transformation between a child node c and its parent node f can be written as:
M_cf = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α)    (8)
where T(t_x, t_y, t_z) is the translation matrix of the child node relative to the parent node, and R_x(α), R_y(β), R_z(γ) are the rotation matrices of the child node's coordinate system about the X, Y and Z axes of the parent node's coordinate system, respectively:
R_x(α) = [1 0 0 0; 0 cosα -sinα 0; 0 sinα cosα 0; 0 0 0 1]
R_y(β) = [cosβ 0 sinβ 0; 0 1 0 0; -sinβ 0 cosβ 0; 0 0 0 1]
R_z(γ) = [cosγ -sinγ 0 0; sinγ cosγ 0 0; 0 0 1 0; 0 0 0 1]
assuming that the coordinates of a joint in its child node's local coordinate system are p = (x, y, z, 1)^T, then according to the coordinate transformation formula (8) its coordinates P = (X, Y, Z, 1)^T in the parent node's coordinate system can be expressed as:
P = M_cf p = T(t_x, t_y, t_z) R_z(γ) R_y(β) R_x(α) p;
once the transformation between child and parent coordinate systems has been obtained at every level, the position of any skeleton point of the body in the global coordinate system is computed by cascading the transformations layer by layer, and its position in the world coordinate system is then derived, yielding the coordinates of each of the driver's joints at each moment of the driver video.
CN201811330010.6A 2018-11-09 2018-11-09 Train driver detection method based on bone detection and three-dimensional reconstruction Pending CN109543576A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811330010.6A CN109543576A (en) 2018-11-09 2018-11-09 Train driver detection method based on bone detection and three-dimensional reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811330010.6A CN109543576A (en) 2018-11-09 2018-11-09 Train driver detection method based on bone detection and three-dimensional reconstruction

Publications (1)

Publication Number Publication Date
CN109543576A true CN109543576A (en) 2019-03-29

Family

ID=65846445

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811330010.6A Pending CN109543576A (en) 2018-11-09 2018-11-09 Train driver detection method based on bone detection and three-dimensional reconstruction

Country Status (1)

Country Link
CN (1) CN109543576A (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075351A (en) * 2006-09-14 2007-11-21 浙江大学 Method for restoring human-body videothree-dimensional movement based on sided shadow and end node
CN106503684A (en) * 2016-10-28 2017-03-15 厦门中控生物识别信息技术有限公司 A kind of face image processing process and device
CN107187467A (en) * 2017-05-27 2017-09-22 中南大学 Driver's monitoring method and system for operation safety and accident imputation
CN108038453A (en) * 2017-12-15 2018-05-15 罗派智能控制技术(上海)有限公司 A kind of driver's state-detection and identifying system based on RGBD
CN107945269A (en) * 2017-12-26 2018-04-20 清华大学 Complicated dynamic human body object three-dimensional rebuilding method and system based on multi-view point video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHE CAO et al.: "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields", 2017 IEEE Conference on Computer Vision and Pattern Recognition *
SHI Xinzhuo: "Research on three-dimensional upper-body human pose reconstruction from monocular images", China Master's Theses Full-text Database, Information Science and Technology *
ZHONG Gaofeng: "Research on image color transfer algorithms", China Master's Theses Full-text Database, Information Science and Technology (Monthly) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197156A (en) * 2019-05-30 2019-09-03 清华大学 Manpower movement and the shape similarity metric method and device of single image based on deep learning
CN110706230A (en) * 2019-10-29 2020-01-17 国网黑龙江省电力有限公司电力科学研究院 Tower abnormity automatic detection method based on prior information
CN110930482A (en) * 2019-11-14 2020-03-27 北京达佳互联信息技术有限公司 Human hand bone parameter determination method and device, electronic equipment and storage medium
CN110930482B (en) * 2019-11-14 2023-10-31 北京达佳互联信息技术有限公司 Method and device for determining bone parameters of human hand, electronic equipment and storage medium
CN111462233A (en) * 2020-03-20 2020-07-28 武汉理工大学 Recovery data processing method and system for ship cab and storage medium
CN111462233B (en) * 2020-03-20 2024-02-13 武汉理工大学 Method, system and storage medium for processing restored data of ship cab
CN111860157A (en) * 2020-06-15 2020-10-30 北京体育大学 Motion analysis method, device, equipment and storage medium
CN111860157B (en) * 2020-06-15 2023-12-26 北京体育大学 Motion analysis method, device, equipment and storage medium
CN111914807A (en) * 2020-08-18 2020-11-10 太原理工大学 Miner behavior identification method based on sensor and skeleton information
CN111914807B (en) * 2020-08-18 2022-06-28 太原理工大学 Miner behavior identification method based on sensor and skeleton information

Similar Documents

Publication Publication Date Title
CN109543576A (en) Train driver detection method based on bone detection and three-dimensional reconstruction
CN107392964B (en) The indoor SLAM method combined based on indoor characteristic point and structure lines
CN110837778B (en) Traffic police command gesture recognition method based on skeleton joint point sequence
CN108830150B (en) One kind being based on 3 D human body Attitude estimation method and device
CN105378796B (en) Scalable volume 3D reconstruct
CN102855470B (en) Estimation method of human posture based on depth image
CN106485207B (en) A kind of Fingertip Detection and system based on binocular vision image
CN106997605B (en) A method of foot type video is acquired by smart phone and sensing data obtains three-dimensional foot type
CN107633267A (en) A kind of high iron catenary support meanss wrist-arm connecting piece fastener recognition detection method
CN107204010A (en) A kind of monocular image depth estimation method and system
CN108805906A (en) A kind of moving obstacle detection and localization method based on depth map
CN106780543A (en) A kind of double framework estimating depths and movement technique based on convolutional neural networks
CN109460267A (en) Mobile robot offline map saves and real-time method for relocating
CN110414546A (en) Use intermediate loss function training image signal processor
CN108801135A (en) Nuclear fuel rod pose automatic identification equipment
CN110910349B (en) Wind turbine state acquisition method based on aerial photography vision
CN104408760A (en) Binocular-vision-based high-precision virtual assembling system algorithm
CN116822100B (en) Digital twin modeling method and simulation test system thereof
US20240144515A1 (en) Weakly paired image style transfer method based on pose self-supervised generative adversarial network
CN109657634A (en) A kind of 3D gesture identification method and system based on depth convolutional neural networks
CN117274515A (en) Visual SLAM method and system based on ORB and NeRF mapping
CN115546061A (en) Three-dimensional point cloud model repairing method with shape perception
CN114036969A (en) 3D human body action recognition algorithm under multi-view condition
CN110516527B (en) Visual SLAM loop detection improvement method based on instance segmentation
CN118071873A (en) Dense Gaussian map reconstruction method and system in dynamic environment

Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190329)