CN115240231A - Image recognition-based sitting posture detection and adjustment method for full-motion simulator - Google Patents

Authority: CN (China)
Prior art keywords: sitting posture, image, full, person, motion simulator
Application number: CN202211154508.8A
Other languages: Chinese (zh)
Other versions: CN115240231B (en)
Inventors: 李剑华, 杨磊, 王兆祎, 吴建荣, 李德斌, 边利建, 李卫坤
Current and original assignee: Zhuhai Xiangyi Aviation Technology Co Ltd
Application filed by Zhuhai Xiangyi Aviation Technology Co Ltd; priority to CN202211154508.8A
Publication of application CN115240231A; application granted and published as CN115240231B
Legal status: Granted, currently active (status as listed by Google Patents, which has not performed a legal analysis)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/02 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/06 Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, e.g. of sheet material
    • G01B11/0608 Height gauges
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00 Simulators for teaching or training purposes
    • G09B9/02 Simulators for teaching or training purposes for teaching control of vehicles or other craft
    • G09B9/08 Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer


Abstract

The invention belongs to the technical field of image recognition and computer vision, and in particular relates to an image recognition-based sitting posture detection and adjustment method, system and device for a full-motion simulator. It aims to solve the problems that existing sitting posture detection and adjustment methods for full-motion simulators have low detection precision and low adjustment efficiency, and can introduce safety hazards during the full-motion simulator experience. The method comprises: collecting a whole-body image; resampling and preprocessing the whole-body image; generating human body parameters; acquiring initial sitting posture adjustment data; when a new experience personnel is detected to be seated, acquiring a sitting posture image and an in-cabin environment image, then resampling and preprocessing them; obtaining a sitting posture detection result for the current experience personnel; extracting the position parameters and relative movement parameters of each instrument; and acquiring a standard sitting posture and finely adjusting the current experience personnel's sitting posture accordingly. The invention improves the detection precision and efficiency of sitting posture detection and adjustment for full-motion simulators, and reduces potential safety hazards.

Description

Image recognition-based sitting posture detection and adjustment method for full-motion simulator
Technical Field
The invention belongs to the technical field of image recognition and computer vision, and particularly relates to a sitting posture detection and adjustment method, system and equipment for a full-motion simulator based on image recognition.
Background
Because an aircraft must perform difficult maneuvers during flight, such as rolls and dives, high demands are placed on a pilot's physical condition. Before becoming an excellent pilot, a series of flight training is indispensable, so that the pilot adapts to the reactions produced by the body and builds tolerance. Real flight training has cost and safety problems and can provide only a limited number of training sessions, whereas simulated training has outstanding advantages: it is energy-saving, economical and safe, is not limited by site or meteorological conditions, shortens the training period, reduces training cost, and improves training efficiency. Simulated training therefore plays a very important role in flight training.
A full-motion simulator is a testing and training device that simulates the flight state, flight environment and flight conditions of an aircraft performing a flight task, and provides the pilot with realistic control loading, visual, auditory and motion cues. Such simulators are very expensive, and in the past it was difficult for anyone other than flight crews to experience them.
In recent years, with the rapid development of China's civil aviation industry, some training centers have opened full-motion simulator flight experience activities to the public, in order to spread aviation knowledge, stimulate the interest of teenagers and adults in aerospace, and cultivate aviation talent.
In past flight experience activities, users had to adjust the sitting posture manually or electrically. Because people differ in height, limb proportions, weight and other respects, the sitting posture that suits each person differs; if every user must adjust manually or electrically each time, this causes considerable inconvenience, reduces the comfort of the full-motion simulator experience, and wastes time, making the experience slow and inefficient. In addition, an incorrectly seated user poses certain safety hazards. On this basis, the invention proposes an image recognition-based sitting posture detection and adjustment method for a full-motion simulator.
Disclosure of Invention
To solve the problems that existing sitting posture detection and adjustment methods for full-motion simulators have low detection precision and low adjustment efficiency, and that potential safety hazards arise during the full-motion simulator experience, the invention provides an image recognition-based sitting posture detection and adjustment method for a full-motion simulator, which comprises the following steps:
s100, collecting a whole body image of a person at the head of a queue in a person queuing queue to be experienced by the full motion simulator as an input image; the whole body image comprises a set reference object;
s200, resampling the input image by a difference method, and preprocessing the resampled input image to obtain a preprocessed whole-body image;
s300, extracting a candidate region of a human body in the preprocessed whole-body image, inputting a pre-constructed key point detection network, acquiring human body joint point information, and generating human body parameters by combining the information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
s400, acquiring an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculating a difference value between the first proportion and the upper body proportion of the first person stored in the database, and selecting historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data; the first person is a person with a height within a set range;
s500, when detecting that the experience personnel in the full-motion simulator leaves, adjusting the position according to the initial sitting posture adjustment data, and reminding the experience personnel to prepare boarding experience;
s600, when detecting that a new experiential person is in the process of fastening a safety belt, acquiring a sitting posture image of the current experiential person and an in-cabin environment image of a full-motion simulator cockpit, and performing resampling and preprocessing to obtain a preprocessed sitting posture image and a preprocessed in-cabin environment image;
s700, extracting a candidate area of the human body in the preprocessed sitting posture image, inputting a pre-constructed key point detection network, and acquiring human body joint points and corresponding coordinates thereof to further obtain a sitting posture detection result of the current experience personnel;
s800, extracting position parameters and relative movement parameters of each instrument and equipment based on the preprocessed cabin environment image; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right, up and down;
s900, based on the sitting posture detection result, combining the human body parameters, the position parameters of the instrument and the relative movement parameters, and obtaining a standard sitting posture through a pre-constructed sitting posture prediction model; and according to the standard sitting posture, finely adjusting the sitting posture of the current experience personnel.
In some preferred embodiments, the preprocessing includes shadow removal, geometric correction, image enhancement and Gaussian blur smoothing;
the image enhancement method is an affine transformation:
I′ = A · I, A = [R | T]

wherein I represents the image before the affine transformation, I′ represents the image after the affine transformation, A represents the affine transformation matrix of the image, R represents the rotation matrix, and T represents the set translation amount.
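A minimal numpy sketch of such an affine enhancement, assuming the transform maps pixel coordinates through a rotation matrix and a set translation (the angle and offsets are illustrative; backward nearest-neighbour sampling is used for brevity):

```python
import numpy as np

# Sketch of affine image enhancement: each output pixel is sampled from the
# source coordinate obtained by undoing the rotation R and translation T.
# The transform values below are illustrative assumptions.

def affine_transform(img, angle_rad, tx, ty):
    h, w = img.shape
    R = np.array([[np.cos(angle_rad), -np.sin(angle_rad)],
                  [np.sin(angle_rad),  np.cos(angle_rad)]])
    T = np.array([tx, ty])
    inv_R = np.linalg.inv(R)                 # sample backwards to avoid holes
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            sx, sy = inv_R @ (np.array([x, y]) - T)  # source coordinate
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = img[sy, sx]
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
shifted = affine_transform(img, 0.0, 1, 0)   # pure translation by one pixel
```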
In some preferred embodiments, the key point detection network is constructed from a convolution processing module, a feature extractor, a feature processing module, a residual processing module and a discrimination module connected in sequence;
the convolution processing module is configured to extract convolution features of the candidate region through a VGG16 network, taking the output of the 10th convolutional layer of the VGG16 network as a first feature and the output of the 13th convolutional layer as a second feature;
the feature extractor is configured to fuse the first feature and the second feature, then apply a convolution and a Sigmoid operation to the fused result to obtain a third feature;
the feature processing module is a U-Net network whose input is the third feature; features of the same scale in the U-Net encoder and decoder are joined by skip connections, and the features output by the decoder are converted into one-dimensional features;
the residual processing module is constructed based on a Residual Network; the input of the Residual Network is the one-dimensional feature output by the U-Net decoder, and it is configured to obtain a heatmap of the corresponding joint points;
the discrimination module is a CRF model whose input is the output of the Residual Network; the CRF model is used to acquire the probability distribution of each joint point's occurrence.
In some preferred embodiments, the loss function of the key point detection network during training is:
L = λ · Σ_{i=1}^{N} ‖x_i − y_i‖² + L_c

wherein L represents the loss function corresponding to the key point detection network; x_i represents the features input to the i-th convolution block of the encoder in the U-Net network; y_i represents the features output by the i-th convolution block of the decoder in the U-Net network; N represents the number of convolution blocks in the U-Net encoder; λ represents a regularization parameter; L_c represents the cost function of the likelihood estimation calculation, i.e. the loss function corresponding to the CRF model; x_1, …, x_N represent the features input to the 1st through N-th encoder convolution blocks, and y_1 represents the features output by the 1st convolution block of the decoder.
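A small numerical sketch of a loss of this shape: a regularised sum of squared differences between encoder-block inputs and decoder-block outputs, plus the CRF cost term. The exact formula in the patent is rendered as an image, so the encoder/decoder pairing below is an assumption based on the skip-connection description.

```python
import numpy as np

# Assumed form of the key point detection network's training loss:
# lambda * sum_i ||x_i - y_i||^2 + L_crf, where x_i is the input of the i-th
# encoder block and y_i the output of the i-th decoder block.

def keypoint_loss(xs, ys, lam, l_crf):
    recon = sum(np.sum((x - y) ** 2) for x, y in zip(xs, ys))
    return lam * recon + l_crf

xs = [np.ones((2, 2)), np.zeros((2, 2))]
ys = [np.zeros((2, 2)), np.zeros((2, 2))]
loss = keypoint_loss(xs, ys, lam=0.5, l_crf=1.0)  # 0.5 * 4 + 1.0
```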
In some preferred embodiments, the upper body proportion is obtained by the pre-constructed body proportion calculation method based on the human body parameters:

P = (w₁(h − l) + w₂a + w₃d + w₄(s + c)/2) / h

wherein P represents the upper body proportion, h represents the height of the person to be experienced, l represents the leg length, a represents the arm length, d represents the head length, s represents the shoulder width, c represents the crotch width, and w₁, w₂, w₃, w₄ represent preset weights.
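The upper-body proportion computation can be sketched directly. The patent defines height, leg length, arm length, head length, shoulder width, crotch width and four preset weights, but the original formula is rendered as an image, so the weighted combination used below is an assumption for illustration.

```python
# Hedged sketch of the body proportion calculation. Variable names follow the
# patent's definitions (h: height, l: leg length, a: arm length, d: head
# length, s: shoulder width, c: crotch width; w1..w4: preset weights); the
# exact combination is an assumption.

def upper_body_ratio(h, l, a, d, s, c, w1, w2, w3, w4):
    return (w1 * (h - l) + w2 * a + w3 * d + w4 * (s + c) / 2) / h

# With only w1 active, this reduces to the plain upper-body fraction (h - l)/h.
p = upper_body_ratio(h=175, l=95, a=72, d=23, s=45, c=35,
                     w1=1.0, w2=0.0, w3=0.0, w4=0.0)
```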
In some preferred embodiments, the sitting posture prediction model is constructed based on a three-way convolutional neural network and a machine learning model;
the inputs of the three-way convolutional neural network are the sitting posture detection results corresponding to the upper body, the lower body and the whole body of the current experience personnel; the network is configured to encode each input separately, obtaining a first, a second and a third convolution feature; the first and second convolution features are concatenated, and the concatenation is fused with the third convolution feature to obtain a fourth convolution feature; the fourth convolution feature is then decoded and passed through a ReLU activation;
the machine learning model is a support vector machine regression model, used to perform support vector regression on the ReLU-activated fourth convolution feature, combined with the human body parameters, the instrument position parameters and the relative movement parameters, to obtain the standard sitting posture.
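The final prediction stage concatenates the activated convolution feature with the body, instrument and movement parameters and regresses seat settings from the combined vector. In the sketch below a plain linear regressor stands in for the support vector regression named in the text, purely to keep the example dependency-free; all values and the output layout are illustrative assumptions.

```python
import numpy as np

# Sketch of the standard-sitting-posture prediction: concatenate the
# ReLU-activated fourth convolution feature with the human body parameters,
# instrument position parameters and relative movement parameters, then map
# the combined vector to seat settings. A linear map stands in for SVR here.

def predict_posture(conv_feat, body, instrument, movement, weights):
    x = np.concatenate([conv_feat, body, instrument, movement])
    return x @ weights                       # regression output: seat settings

conv_feat = np.array([0.2, 0.8])             # ReLU-activated feature (assumed)
body = np.array([1.75, 0.72])                # e.g. height, arm length (m)
instrument = np.array([0.5])                 # e.g. control-stick distance
movement = np.array([0.1])                   # e.g. available fore-aft travel
weights = np.zeros((6, 2)); weights[0, 0] = 1.0; weights[1, 1] = 1.0
settings = predict_posture(conv_feat, body, instrument, movement, weights)
```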
In some preferred embodiments, the image recognition-based sitting posture detection and adjustment method for a full-motion simulator further comprises judging whether to perform sitting posture fine adjustment on the current experience personnel according to set conditions. The set conditions are: whether the movement speed of the current full-motion simulator is zero; whether the current full-motion simulator has applied braking measures; and whether the safety belt of the current experience personnel is fastened;
if and only if all three conditions are met, sitting posture fine adjustment is performed on the current experience personnel.
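This safety interlock is a simple conjunction of the three set conditions; the parameter names below are assumptions for illustration.

```python
# Sketch of the safety gate: sitting posture fine adjustment is allowed only
# when the simulator is stationary, braking measures are applied, and the
# experience personnel's seat belt is fastened.

def may_fine_tune(motion_speed, brake_engaged, belt_fastened):
    # all three set conditions must hold simultaneously
    return motion_speed == 0 and brake_engaged and belt_fastened

ready = may_fine_tune(motion_speed=0, brake_engaged=True, belt_fastened=True)
```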
In a second aspect of the present invention, an image recognition-based sitting posture detection and adjustment system for a full-motion simulator is provided, comprising: an image acquisition device, a remote server and a sitting posture adjustment device, which are in communication connection.
The image acquisition equipment is configured to acquire a whole-body image of a person at the head of a queue in a person queuing queue to be experienced by the full-motion simulator as an input image; the whole-body image comprises a set reference object; the system is also configured to acquire a sitting posture image of the current experience personnel and an in-cabin environment image of the full-motion simulator cockpit when detecting that the new experience personnel are in the process of fastening the safety belt;
the remote server includes: the system comprises a first preprocessing module, a second preprocessing module, a joint point extraction module, an initial sitting posture acquisition module, a sitting posture detection module, a parameter extraction module and a standard sitting posture acquisition module;
the first preprocessing module is configured to resample the input image by a difference method and preprocess the resampled input image to obtain a preprocessed whole body image;
the joint point extraction module is configured to extract a candidate region of a human body in the preprocessed whole-body image, input a pre-constructed key point detection network, acquire human body joint point information, and generate human body parameters by combining information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
the initial sitting posture acquisition module is configured to acquire an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculating a difference value between the first proportion and the upper body proportion of the first person stored in the database, and selecting historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data; the first person is a person with the height within a set range and sends the person to sitting posture adjusting equipment of the full-motion simulator;
the second preprocessing module is configured to resample and preprocess the sitting posture image and the cabin interior environment image to obtain a preprocessed sitting posture image and a preprocessed cabin interior environment image;
the sitting posture detection module is configured to extract a candidate area of a human body in the preprocessed sitting posture image, input a pre-constructed key point detection network, acquire a human body joint point and a corresponding coordinate thereof, and further obtain a sitting posture detection result of the current experience personnel;
the parameter extraction module is configured to extract position parameters and relative movement parameters of each instrument and equipment based on the preprocessed in-cabin environment image; the position parameters of the instrument equipment comprise position information of the instrument panel, the control stick and the foot brake; the relative movement parameters comprise the relative distances by which the current experience personnel can move forward and backward, left and right, and up and down;
the standard sitting posture acquisition module is configured to acquire a standard sitting posture through a pre-constructed sitting posture prediction model based on the sitting posture detection result and in combination with the human body parameter, the position parameter of the instrument and the relative movement parameter, and send the standard sitting posture to sitting posture adjustment equipment;
the sitting posture adjustment device is configured to adjust the seat position according to the initial sitting posture adjustment data and remind the person to be experienced to prepare to board, after detecting that the experience personnel in the full-motion simulator has left; it is further configured to finely adjust the sitting posture of the current experience personnel according to the standard sitting posture.
In a third aspect of the present invention, an electronic device is provided, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the above image recognition-based sitting posture detection and adjustment method for a full-motion simulator.
In a fourth aspect of the present invention, a computer-readable storage medium is provided, where computer instructions are stored, and the computer instructions are used for being executed by a computer to implement the sitting posture detecting and adjusting method for a full-motion simulator based on image recognition.
The invention has the beneficial effects that:
the invention improves the detection precision and efficiency of the sitting posture detection and adjustment method for the full-motion simulator, reduces the potential safety hazard and improves the user experience.
1) The invention first obtains the upper body proportion of the person to be experienced, matches it against stored proportions to retrieve the historical sitting posture adjustment data of a person with a similar upper body proportion, and coarsely adjusts the seat position before the person to be experienced boards the full-motion simulator. After the person boards and is seated, the sitting posture is adjusted again according to the sitting posture image and the in-cabin environment image. This two-stage adjustment saves adjustment time, improves adjustment precision, and greatly improves the user experience.
2) The method extracts features of the human body candidate region from the 10th and 13th convolutional layers of the VGG16 network and, after fusing them, feeds them sequentially into the U-Net network, the Residual Network and the CRF model to construct the key point detection network for predicting human joint points, which improves prediction accuracy and reduces network complexity.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings.
FIG. 1 is a schematic flow chart of a sitting posture detecting and adjusting method for a full-motion simulator based on image recognition according to an embodiment of the present invention;
FIG. 2 is a block diagram of a system implementing the image recognition-based sitting posture detection and adjustment method for a full-motion simulator according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a key point detection network architecture according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer system of an electronic device suitable for implementing the embodiments of the present application according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
The invention discloses a sitting posture detection and adjustment method for a full-motion simulator based on image recognition, which comprises the following steps as shown in figure 1:
s100, collecting a whole body image of a person at the head of a queue in a person queuing queue to be experienced by the full motion simulator as an input image; the whole-body image comprises a set reference object;
s200, resampling the input image by a difference method, and preprocessing the resampled input image to obtain a preprocessed whole body image;
s300, extracting a candidate region of a human body in the preprocessed whole-body image, inputting a pre-constructed key point detection network, acquiring human body joint point information, and generating human body parameters by combining the information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
s400, acquiring an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculating a difference value between the first proportion and the upper body proportion of the first person stored in the database, and selecting historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data; the first person is a person with a height within a set range;
s500, when detecting that the experience personnel in the full-motion simulator leaves, adjusting the position according to the initial sitting posture adjustment data, and reminding the experience personnel to prepare boarding experience;
s600, when detecting that a new experience person is in the process of fastening a safety belt, acquiring a sitting posture image of the current experience person and an in-cabin environment image of a full-motion simulator cockpit, and performing resampling and preprocessing to obtain a preprocessed sitting posture image and a preprocessed in-cabin environment image;
s700, extracting a candidate area of the human body in the preprocessed sitting posture image, inputting a pre-constructed key point detection network, and acquiring human body joint points and corresponding coordinates thereof to further obtain a sitting posture detection result of the current experience personnel;
s800, extracting position parameters and relative movement parameters of each instrument and equipment based on the preprocessed cabin environment image; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right and up and down;
s900, acquiring a standard sitting posture through a pre-constructed sitting posture prediction model based on the sitting posture detection result and in combination with the human body parameters, the position parameters of the instrument and the relative movement parameters; and carrying out sitting posture fine adjustment on the current experience personnel according to the standard sitting posture.
In order to more clearly describe the sitting posture detection and adjustment method for a full-motion simulator based on image recognition, the steps of an embodiment of the method of the present invention are described in detail below with reference to the drawings.
S100, collecting a whole body image of a person at the head of a queue in a person queuing queue to be experienced by the full motion simulator as an input image; the whole body image comprises a set reference object;
in this embodiment, an image acquisition device arranged outside the full-motion simulator collects the whole-body image of the person at the head of the queue of persons waiting to experience the full-motion simulator, and this image serves as the input image. In addition, in order to acquire the human body parameters of the person to be experienced later, a reference object needs to be captured during acquisition. In the present invention, the reference object is a reference object of a set height.
S200, resampling the input image by an interpolation method, and preprocessing the resampled input image to obtain a preprocessed whole-body image;
in the present embodiment, the input image is preferably resampled by B-spline interpolation, so that the pixels of the input image remain isotropic; the preset pixel size is preferably 1 mm × 1 mm, and may be set according to actual requirements in other embodiments.
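As an illustrative sketch of this resampling step (a pure-NumPy bilinear interpolation stands in for the preferred B-spline interpolation, and the pixel-pitch arguments are assumed for illustration — neither the function name nor its parameters come from the patent):

```python
import numpy as np

def resample_image(img, src_pitch_mm, dst_pitch_mm=1.0):
    """Resample a 2-D image to an isotropic pixel pitch.

    Bilinear stand-in for the B-spline interpolation described in the
    text; src_pitch_mm is the (row, col) pixel pitch of the input.
    """
    h, w = img.shape
    new_h = max(1, int(round(h * src_pitch_mm[0] / dst_pitch_mm)))
    new_w = max(1, int(round(w * src_pitch_mm[1] / dst_pitch_mm)))
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # interpolate along x on the two bracketing rows, then along y
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

For example, a 4×4 image captured at a 2 mm pitch resamples to 8×8 at the 1 mm target pitch.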
The preprocessing comprises shadow removal, geometric correction, image enhancement and Gaussian blur smoothing; the image enhancement method is an affine transformation:

I′ = A·I, A = [ R | T ]  (1)

wherein I represents the image before the affine transformation, I′ represents the image after the affine transformation, A represents the affine transformation matrix, R represents the rotation matrix, and T represents the set translation amount.
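A minimal sketch of such an affine enhancement, assuming the common convention x′ = R·x + T with nearest-neighbour inverse mapping; the rotation angle and translation values are illustrative, not taken from the patent:

```python
import numpy as np

def affine_enhance(img, angle_deg=5.0, t=(2.0, 3.0)):
    """Apply I' = A·I with A = [R | T]: rotate by R, translate by T.

    Inverse mapping: for each output pixel x', sample the input at
    x = R^{-1} (x' - T), nearest neighbour; out-of-range pixels stay 0.
    """
    h, w = img.shape
    th = np.deg2rad(angle_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    T = np.asarray(t, dtype=float)
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = (dst - T) @ np.linalg.inv(R).T      # invert x' = R x + T
    sx = np.rint(src[:, 0]).astype(int)
    sy = np.rint(src[:, 1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out
```

With a zero angle and zero translation the transform reduces to the identity, which makes the convention easy to check.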
S300, extracting a candidate region of a human body in the preprocessed whole-body image, inputting a pre-constructed key point detection network, acquiring human body joint point information, and generating human body parameters by combining the information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
the invention describes the skeletal information of the human body by detecting key points of the human body, such as joints and facial features. The MS COCO dataset is a multi-person human key point detection dataset with 17 key point categories (nose, left eye, right eye, left ear, right ear, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left ankle and right ankle) and more than 300,000 image samples, and it is also the most commonly used dataset in current related research.
In this embodiment, the key point detection network is constructed based on a convolution processing module, a feature extractor, a feature processing module, a residual processing module, and a discrimination module, which are connected in sequence, as shown in fig. 3;
the convolution processing module is configured to extract convolution features of the candidate region through a VGG16 network, taking the output of the 10th convolution layer of the VGG16 network as the first feature and the output of the 13th convolution layer of the VGG16 network as the second feature;
the feature extractor is configured to fuse the first feature and the second feature, and perform convolution and Sigmoid operations after fusion to obtain a third feature;
the feature processing module is a U-Net network whose input is the third feature; features of the same scale in the encoder and decoder of the U-Net network are joined by skip connections, and the features output by the decoder are converted into one-dimensional features;
the residual processing module is constructed based on a Residual Network; the input of the Residual Network is the one-dimensional feature output and converted by the U-Net decoder, and it is configured to obtain a thermodynamic diagram (heatmap) of each corresponding joint point;
the discrimination module is a CRF model whose input is the output of the Residual Network; the CRF model is used to acquire the probability distribution of the occurrence of each joint point.
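The step from per-joint heatmaps ("thermodynamic diagrams") to joint coordinates can be sketched as below; this is only the decoding step after the residual module, under the usual one-peak-per-joint assumption — the network itself is not reproduced here:

```python
import numpy as np

def joints_from_heatmaps(heatmaps):
    """Read joint coordinates off per-joint heatmaps.

    For each joint's heatmap, take the (row, col) argmax as the joint
    position and the value there as its confidence score.
    """
    coords, scores = [], []
    for hm in heatmaps:                      # one 2-D map per joint
        idx = np.unravel_index(np.argmax(hm), hm.shape)
        coords.append(idx)
        scores.append(float(hm[idx]))
    return coords, scores
```

With 17 heatmaps (the MS COCO convention mentioned above), this yields the 17 joint coordinates used downstream.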
In addition, the loss function of the key point detection network during training is as follows:

L = Σ_{i=1}^{N} ‖E_i − D_i‖² + λ₁‖E₁ − D₁‖² + λ₂·L_CRF  (2)

wherein L represents the loss function corresponding to the key point detection network, E_i represents the features input to the i-th convolution block of the encoder in the U-Net network, D_i represents the features output by the i-th convolution block of the decoder in the U-Net network, N represents the number of convolution blocks in the U-Net encoder (in the present invention N is set to 4, i.e. the convolution blocks 1, 2, 3, 4 in figure 3), λ₁ and λ₂ represent regularization parameters, E₁ represents the features input to the 1st convolution block of the encoder in the U-Net network, D₁ represents the features output by the 1st convolution block of the decoder in the U-Net network, and L_CRF represents the cost function of the likelihood estimation, i.e. the loss function corresponding to the CRF model.
L_CRF = −Σ y·log(ŷ)  (3)

wherein y represents the true label corresponding to the human body joint point information, and ŷ represents the prediction result corresponding to the human body joint point information.
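A hedged sketch of this training loss in NumPy — the original formula survives only as an image, so the exact combination of terms is an assumption reconstructed from the variable descriptions (per-block encoder/decoder reconstruction gaps, a weighted first-block term, and the cross-entropy CRF term):

```python
import numpy as np

def keypoint_loss(enc_in, dec_out, y_true, y_pred, lam1=0.01, lam2=0.01):
    """Assumed reconstruction of loss (2) with CRF term (3).

    enc_in / dec_out: lists of matching encoder-input and decoder-output
    feature arrays (one pair per U-Net convolution block).
    y_true / y_pred: joint-point label distribution and prediction.
    """
    # squared gaps between matching encoder/decoder blocks
    recon = sum(float(np.sum((e - d) ** 2)) for e, d in zip(enc_in, dec_out))
    # extra-weighted first-block term
    skip = float(np.sum((enc_in[0] - dec_out[0]) ** 2))
    # cross-entropy CRF term, eq. (3)
    crf = -float(np.sum(y_true * np.log(y_pred + 1e-12)))
    return recon + lam1 * skip + lam2 * crf
```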
S400, acquiring an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculating a difference value between the first proportion and the upper body proportion of the first person stored in the database, and selecting historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data; the first person is a person with a height within a set range;
statistical data in the prior art show that, for people of different heights, the upper-body lengths are relatively close (except for people whose heights differ greatly, for example by about 20-30 cm). Based on this, in this embodiment the upper body proportion of the person to be experienced is obtained first and matched against stored proportions. Historical sitting posture adjustment data of people with similar upper body proportions are then retrieved, and the position is coarsely adjusted before the person to be experienced boards the full-motion simulator. To further ensure the precision of the coarse adjustment, the height range is limited when matching the upper body proportion, preferably to within 3 cm of the height of the person to be experienced: for example, if the person to be experienced is 170 cm tall, the upper body proportions of people with heights in [167 cm, 173 cm] are selected from the database (i.e. a database storing parameters such as height, weight, age and upper body proportion) for matching. In other embodiments, the set height range can be narrowed if higher precision is required.
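The coarse matching just described can be sketched as follows; the record layout of `database` (height, upper-body proportion, stored seat adjustment) is a hypothetical structure assumed for illustration:

```python
def initial_adjustment(height_cm, upper_ratio, database, height_tol=3.0):
    """Coarse matching: restrict to stored people whose height is within
    ±height_tol cm of the newcomer, then return the seat adjustment of
    the record whose upper-body proportion is closest.

    database: list of (height_cm, upper_ratio, seat_adjustment) tuples.
    Returns None when nobody falls inside the height window.
    """
    pool = [r for r in database if abs(r[0] - height_cm) <= height_tol]
    if not pool:
        return None
    best = min(pool, key=lambda r: abs(r[1] - upper_ratio))
    return best[2]
```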
The upper body proportion is calculated by the following method:

P = (H − L_leg)/H + (w₁·L_arm + w₂·L_head + w₃·W_s + w₄·W_c)/H  (4)

wherein P represents the upper body proportion, H represents the height of the person to be experienced, L_leg represents the leg length, L_arm represents the arm length, L_head represents the head length, W_s represents the shoulder width, W_c represents the crotch width, and w₁, w₂, w₃, w₄ represent preset weights.
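A direct sketch of this proportion calculation; the formula shape is reconstructed from the listed variables (the original is an image), and the default weight values are illustrative placeholders for the preset weights:

```python
def upper_body_proportion(H, leg, arm, head, shoulder, crotch,
                          w=(0.05, 0.05, 0.02, 0.02)):
    """Assumed form of equation (4): the height-minus-leg base ratio
    plus a height-normalised weighted correction from the remaining
    body parameters. All lengths in the same unit (e.g. cm)."""
    base = (H - leg) / H
    corr = (w[0] * arm + w[1] * head + w[2] * shoulder + w[3] * crotch) / H
    return base + corr
```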
S500, when it is detected that the experience person in the full-motion simulator has left, the position is adjusted according to the initial sitting posture adjustment data, and the person to be experienced is reminded to prepare to board for the experience;
in this embodiment, after the previous experience person leaves and before the next experience person boards, the position is roughly adjusted in advance according to the initial sitting posture adjustment data, which improves the user experience and saves time.
S600, when detecting that a new experiential person is in the process of fastening a safety belt, acquiring a sitting posture image of the current experiential person and an in-cabin environment image of a full-motion simulator cockpit, and performing resampling and preprocessing to obtain a preprocessed sitting posture image and a preprocessed in-cabin environment image;
in this embodiment, when it is detected that a new experience person is fastening the safety belt, the sitting posture image of the current experience person and the cabin environment image of the full-motion simulator cockpit are acquired through the image acquisition device built into the full-motion simulator. It is assumed by default that the experience person does not move while fastening the safety belt (capturing the images during this process allows the person's activity and the standard sitting posture calculation to proceed in parallel, which greatly saves time and further improves the user experience). In other embodiments, the time at which the sitting posture image and the cabin environment image are captured can be set according to the actual situation.
After acquisition, the sitting posture image and the cabin environment image are resampled and preprocessed according to the method in step S200, obtaining the preprocessed sitting posture image and the preprocessed cabin environment image.
S700, extracting a candidate area of the human body in the preprocessed sitting posture image, inputting a pre-constructed key point detection network, and acquiring human body joint points and corresponding coordinates thereof to further obtain a sitting posture detection result of the current experience personnel;
in this embodiment, based on the candidate area of the human body in the preprocessed sitting posture image, the human body joint points and the corresponding coordinates thereof are obtained through the key point detection network constructed in S300, and then the sitting posture detection result of the current experience person is obtained.
S800, extracting position parameters and relative movement parameters of each instrument and equipment based on the preprocessed cabin environment image; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right, up and down;
s900, acquiring a standard sitting posture through a pre-constructed sitting posture prediction model based on the sitting posture detection result and in combination with the human body parameters, the position parameters of the instrument and the relative movement parameters; and carrying out sitting posture fine adjustment on the current experience personnel according to the standard sitting posture.
In this embodiment, the sitting posture prediction model is constructed based on a three-way convolutional neural network and a machine learning model;
the inputs of the three-way convolutional neural network are the sitting posture detection results corresponding to the upper body, the lower body and the whole body of the current experience person; the three-way convolutional neural network is configured to encode each input separately to obtain a first, a second and a third convolution feature; splice the first convolution feature and the second convolution feature, and fuse the spliced result with the third convolution feature to obtain a fourth convolution feature; and sequentially decode and ReLU-activate the fourth convolution feature;
the machine learning model is a support vector machine regression model, used to perform support vector regression based on the ReLU-activated fourth convolution feature combined with the human body parameters, the position parameters of the instruments and the relative movement parameters, obtaining the standard sitting posture.
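The data flow above can be sketched as follows. This is only a shape-level illustration: fixed random projections stand in for the three CNN encoder branches, and a plain linear map stands in for the support-vector regression, so none of the learned weights are real:

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_standard_posture(upper, lower, whole, extra, w, b=0.0):
    """Sketch of the prediction path: encode each of the three branches,
    splice the first two codes, fuse (add) with the third, apply ReLU,
    then regress together with the extra parameters (human body, instrument
    position, relative movement)."""
    W1 = rng.standard_normal((4, upper.size))   # toy "encoders"
    W2 = rng.standard_normal((4, lower.size))
    W3 = rng.standard_normal((8, whole.size))
    spliced = np.concatenate([W1 @ upper, W2 @ lower])   # 1st ⊕ 2nd
    fused = spliced + W3 @ whole                         # fuse with 3rd
    feat = np.maximum(fused, 0.0)                        # ReLU activation
    x = np.concatenate([feat, extra])                    # + extra parameters
    return float(w @ x + b)                              # stand-in for SVR
```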
When the sitting posture prediction model is trained, the corresponding loss function is as follows:

L_s = α·Σ_{c=1}^{C} ‖ŷ_c − y_c‖²  (5)

wherein L_s represents the loss function corresponding to the sitting posture prediction model, C represents the number of branches of the three-way convolutional neural network, ŷ_c and y_c respectively represent the prediction result and the true label of the standard sitting posture output by the sitting posture prediction model, and α represents a preset weight parameter.
After the current experience person is seated and before the experience begins, the sitting posture of the current experience person is further fine-tuned based on the pre-constructed sitting posture prediction model, achieving precise adjustment of the sitting posture and further improving the user experience.
In addition, in order to avoid accidents during fine adjustment, whether to execute the sitting posture fine-tuning step for the current experience person is judged according to the following conditions: whether the motion speed of the current full-motion simulator is zero; whether the current full-motion simulator has applied its brake; and whether the safety belt of the experience person of the current full-motion simulator is fastened;
sitting posture fine adjustment is performed on the current experience person if and only if all three conditions are met.
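The three interlock conditions reduce to a simple predicate; the parameter names are chosen here for illustration:

```python
def may_fine_tune(speed, brake_engaged, belt_fastened):
    """Fine-tune the seat only when the simulator is stationary,
    its brake is applied, and the occupant's belt is fastened."""
    return speed == 0 and brake_engaged and belt_fastened
```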
After step S900, the method further includes automatically adjusting the tightness of the five-point seat belt, specifically:
acquiring the item to be experienced by the current experience person, obtaining the maximum bump degree and the maximum acceleration in the item to be experienced, and then calculating the tightness of the five-point seat belt of the full-motion simulator:

T₀ = T_s + δ·(w₁·a + w₂·j)

wherein T₀ represents the initial tightness of the five-point seat belt corresponding to the item to be experienced, T_s is the standard tightness of the five-point seat belt under the current experience item as set in the flight training outline or the SOP, w₁ and w₂ represent preset weights, and a and j represent the acceleration and the bump degree; when the acceleration and the bump degree exceed the set threshold range, δ = 1, otherwise δ = 0.

After the current experience person fastens the five-point seat belt, the pressure exerted by the fastened belt on the person's abdomen is acquired, and the initial tightness is adjusted based on that pressure value; the specific calculation is:

T_f = T₀ − μ·w₃·p

wherein p represents the pressure value, T_f represents the final tightness of the five-point seat belt, and w₃ represents a preset weight; when the pressure value exceeds the set threshold range, μ = 1, otherwise μ = 0.
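The two-stage tightness rule can be sketched directly; since the patent's formulas survive only as images, the combination of terms, the weight values and the thresholds below are all assumptions made for illustration:

```python
def belt_tightness(t_std, accel, jerk, pressure,
                   w=(0.1, 0.1, 0.05),
                   accel_max=2.0, jerk_max=1.5, p_max=30.0):
    """Assumed reconstruction of the five-point-belt rule: exceeding the
    motion limits tightens the belt relative to the standard tightness
    t_std; excess abdominal pressure then relaxes it.

    accel / jerk: maximum acceleration and bump degree of the item;
    pressure: measured abdominal pressure after fastening.
    """
    delta = 1.0 if (accel > accel_max or jerk > jerk_max) else 0.0
    t0 = t_std + delta * (w[0] * accel + w[1] * jerk)   # initial tightness
    mu = 1.0 if pressure > p_max else 0.0
    return t0 - mu * w[2] * pressure                    # final tightness
```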
A system for detecting and adjusting the sitting posture of a full-motion simulator based on image recognition according to a second embodiment of the present invention is shown in fig. 2, and includes: an image acquisition device, a remote server and a sitting posture adjusting device; the image acquisition device, the remote server and the sitting posture adjusting device are communicatively connected.
The image acquisition equipment is configured to acquire a whole-body image of a person at the head of a queue in a person queuing queue to be experienced by the full-motion simulator as an input image; the whole body image comprises a set reference object; the system is also configured to acquire a sitting posture image of the current experience personnel and an in-cabin environment image of the full-motion simulator cockpit when detecting that the new experience personnel are in the process of fastening the safety belt;
the remote server includes: the system comprises a first preprocessing module, a second preprocessing module, a joint point extracting module, an initial sitting posture acquiring module, a sitting posture detecting module, a parameter extracting module and a standard sitting posture acquiring module;
the first preprocessing module is configured to resample the input image by an interpolation method and preprocess the resampled input image to obtain a preprocessed whole-body image;
the joint point extraction module is configured to extract a candidate region of a human body in the preprocessed whole-body image, input a pre-constructed key point detection network, acquire human body joint point information, and generate human body parameters by combining information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
the initial sitting posture acquisition module is configured to acquire an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculate a difference value between the first proportion and the upper body proportion of the first person stored in the database, and select historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data, the first person being a person whose height is within a set range; and send the initial sitting posture adjustment data to the sitting posture adjusting device of the full-motion simulator;
the second preprocessing module is configured to resample and preprocess the sitting posture image and the cabin environment image to obtain a preprocessed sitting posture image and a preprocessed cabin environment image;
the sitting posture detection module is configured to extract a candidate area of a human body in the preprocessed sitting posture image, input a pre-constructed key point detection network, acquire a human body joint point and a corresponding coordinate thereof, and further obtain a sitting posture detection result of the current experience personnel;
the parameter extraction module is configured to extract position parameters and relative movement parameters of each instrument and equipment based on the preprocessed images of the environment in the cabin; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right, up and down;
the standard sitting posture acquisition module is configured to acquire a standard sitting posture through a pre-constructed sitting posture prediction model based on the sitting posture detection result and in combination with the human body parameter, the position parameter of the instrument and the relative movement parameter, and send the standard sitting posture to sitting posture adjustment equipment;
the sitting posture adjusting equipment is configured to adjust the position according to the initial sitting posture adjusting data and remind the person to be experienced of preparing boarding experience after detecting that the experienced person in the full-motion simulator leaves; and the sitting posture adjusting device is also configured to perform sitting posture fine adjustment on the current experience personnel according to the standard sitting posture.
It should be noted that the system for detecting and adjusting the sitting posture of a full-motion simulator based on image recognition provided in the foregoing embodiment is only illustrated by the division of functional modules described above. In practical applications, the functions may be allocated to different functional modules as needed; that is, the modules or steps in the embodiment of the present invention may be further decomposed or combined. For example, the modules in the foregoing embodiment may be combined into one module, or further split into a plurality of sub-modules, so as to complete all or part of the functions described above. The names of the modules and steps involved in the embodiments of the present invention are only for distinguishing the modules or steps, and are not to be construed as unduly limiting the present invention.
An electronic apparatus according to a third embodiment of the present invention includes: at least one processor; and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the image-recognition-based sitting posture detection and adjustment method for a full-motion simulator.
A computer readable storage medium according to a fourth embodiment of the present invention stores computer instructions for being executed by a computer to implement the sitting posture detecting and adjusting method for a full-motion simulator based on image recognition.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes and related descriptions of the storage device and the processing device described above may refer to the corresponding processes in the foregoing method examples, and are not described herein again.
Referring now to FIG. 4, there is illustrated a block diagram of a computer system suitable for use as a server in implementing embodiments of the method, system, and apparatus of the present application. The server shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in fig. 4, the computer system includes a Central Processing Unit (CPU) 401 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for system operation are also stored. The CPU 401, ROM 402, and RAM 403 are connected to each other via a bus 404. An Input/Output (I/O) interface 405 is also connected to the bus 404.
The following components are connected to the I/O interface 405: an input portion 406 including a keyboard, a mouse, and the like; an output section 407 including a Display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a Network interface card such as a LAN (Local Area Network) card, a modem, and the like. The communication section 409 performs communication processing via a network such as the internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 410 as necessary, so that a computer program read out therefrom is installed into the storage section 408 as necessary.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 409, and/or installed from the removable medium 411. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 401. It should be noted that the computer readable medium mentioned above in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terms "first," "second," and the like are used for distinguishing between similar elements and not necessarily for describing or implying a particular order or sequence.
The terms "comprises," "comprising," or any other similar term are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
So far, the technical solutions of the present invention have been described with reference to the preferred embodiments shown in the drawings; however, it will be readily understood by those skilled in the art that the scope of protection of the present invention is obviously not limited to these specific embodiments. Those skilled in the art may make equivalent changes or substitutions of the related technical features without departing from the principle of the invention, and the technical solutions after such changes or substitutions shall fall within the scope of protection of the invention.

Claims (10)

1. A sitting posture detection and adjustment method for a full-motion simulator based on image recognition is characterized by comprising the following steps:
s100, collecting a whole body image of a person at the head of a queue in a person queuing queue to be experienced by the full motion simulator as an input image; the whole body image comprises a set reference object;
s200, resampling the input image by a difference method, and preprocessing the resampled input image to obtain a preprocessed whole-body image;
s300, extracting a candidate region of a human body in the preprocessed whole-body image, inputting a pre-constructed key point detection network, acquiring human body joint point information, and generating human body parameters by combining the information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
s400, acquiring an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculating a difference value between the first proportion and the upper body proportion of the first person stored in the database, and selecting historical sitting posture adjustment data of the first person corresponding to the minimum difference value as initial sitting posture adjustment data; the first person is a person with a height within a set range;
s500, when detecting that the experience personnel in the full-motion simulator leaves, adjusting the data adjusting position according to the initial sitting posture, and reminding the personnel to be experienced to prepare boarding experience;
s600, when detecting that a new experience person is in the process of fastening a safety belt, acquiring a sitting posture image of the current experience person and an in-cabin environment image of a full-motion simulator cockpit, and performing resampling and preprocessing to obtain a preprocessed sitting posture image and a preprocessed in-cabin environment image;
s700, extracting a candidate area of the human body in the preprocessed sitting posture image, inputting a pre-constructed key point detection network, and acquiring human body joint points and corresponding coordinates thereof to further obtain a sitting posture detection result of the current experience personnel;
s800, extracting position parameters and relative movement parameters of each instrument and equipment based on the preprocessed cabin environment image; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right and up and down;
s900, based on the sitting posture detection result, combining the human body parameters, the position parameters of the instrument and the relative movement parameters, and obtaining a standard sitting posture through a pre-constructed sitting posture prediction model; and according to the standard sitting posture, finely adjusting the sitting posture of the current experience personnel.
2. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 1, wherein the preprocessing comprises shadow removal, geometric correction, image enhancement and Gaussian blur smoothing;
the image enhancement method is an affine transformation:

I' = M · I,  M = [R  T]

wherein I represents the image before the affine transformation, I' represents the image after the affine transformation, M represents the affine transformation matrix, R represents the rotation matrix, and T represents the set translation amount.
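The affine transformation of claim 2 can be sketched in coordinate form, mapping each pixel coordinate p to p' = R·p + T via the 2x3 matrix M = [R | T]; the helper names and the specific angle/translation values below are illustrative:

```python
import numpy as np

def affine_matrix(theta, tx, ty):
    """Build the 2x3 affine matrix M = [R | T] for rotation angle theta
    (radians) and translation (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx],
                     [s,  c, ty]])

def transform_points(points, M):
    """Apply M to an (N, 2) array of pixel coordinates using homogeneous
    coordinates [x, y, 1]."""
    homo = np.hstack([points, np.ones((len(points), 1))])  # (N, 3)
    return homo @ M.T                                      # (N, 2)

M = affine_matrix(0.0, 5.0, -2.0)  # pure translation by (5, -2)
print(transform_points(np.array([[0.0, 0.0], [1.0, 1.0]]), M))
```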
3. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 1, wherein the key point detection network is constructed based on a convolution processing module, a feature extractor, a feature processing module, a residual processing module and a discrimination module which are connected in sequence;
the convolution processing module is configured to extract convolution characteristics of the candidate area through the VGG16 network, and takes the output of the 10 th layer of the convolution layer in the VGG16 network as a first characteristic and takes the output of the 13 th layer of the convolution layer in the VGG16 network as a second characteristic;
the feature extractor is configured to fuse the first feature and the second feature, and perform convolution and Sigmoid operation after the fusion to obtain a third feature;
the feature processing module is a U-Net network whose input is the third feature; features of the same scale in the encoder and the decoder of the U-Net network are linked by skip connections, and the features output by the decoder are converted into one-dimensional features;
the residual processing module is constructed based on a Residual Network; the input of the Residual Network is the one-dimensional feature converted from the output of the U-Net decoder, and the module is configured to obtain a heat map of the corresponding joint points;
the discrimination module is a CRF model whose input is the output of the Residual Network; the CRF model is used to obtain the probability distribution of the occurrence of each joint point.
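One hedged reading of the feature extractor in claim 3 (fuse the layer-10 and layer-13 VGG16 features, then convolve and apply a Sigmoid) might look like the NumPy sketch below; the nearest-neighbour upsampling, the 1x1-convolution-as-matrix trick, and all shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(f10, f13, w):
    """Sketch of the feature extractor: upsample the deeper layer-13
    feature (C, H/2, W/2) to the layer-10 resolution (C, H, W),
    concatenate along channels, apply a 1x1 convolution (here a per-pixel
    linear map w of shape (C_out, 2C)) and a Sigmoid."""
    up = f13.repeat(2, axis=1).repeat(2, axis=2)       # nearest-neighbour upsample
    cat = np.concatenate([f10, up], axis=0)            # (2C, H, W)
    mixed = np.tensordot(w, cat, axes=([1], [0]))      # (C_out, H, W)
    return sigmoid(mixed)

f10 = np.zeros((4, 8, 8))
f13 = np.ones((4, 4, 4))
w = np.full((2, 8), 0.1)
print(fuse_features(f10, f13, w).shape)  # (2, 8, 8)
```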
4. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 1, wherein the loss function of the key point detection network during training is:

L = Σ_{i=1..N} ‖x_i − y_i‖² + λ · L_CRF

wherein L represents the loss function of the key point detection network, x_i represents the features input to the i-th convolution block of the encoder in the U-Net network, y_i represents the features output by the i-th convolution block of the decoder in the U-Net network, N represents the number of convolution blocks in the U-Net network encoder, λ represents a regularization parameter, and L_CRF represents the cost function computed by likelihood estimation, i.e. the loss function of the CRF model, evaluated on the encoder block inputs x_1, …, x_N and the output y_1 of the 1st convolution block of the decoder.
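Under the (assumed) reading that claim 4's loss pairs the i-th encoder input with the i-th decoder output and adds a regularised CRF cost, the training loss could be computed as:

```python
import numpy as np

def keypoint_loss(enc_inputs, dec_outputs, lam, crf_cost):
    """Hedged sketch of the key point detection network's training loss:
    a squared-error reconstruction term summed over the N matching
    encoder/decoder convolution blocks, plus lam times the CRF
    likelihood-estimate cost. The exact pairing of blocks is an
    assumption, not stated explicitly in the patent."""
    recon = sum(np.sum((x - y) ** 2)
                for x, y in zip(enc_inputs, dec_outputs))
    return recon + lam * crf_cost

val = keypoint_loss([np.array([1.0, 2.0])], [np.array([0.0, 2.0])], 0.5, 4.0)
print(val)  # (1 - 0)^2 + 0.5 * 4 = 3.0
```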
5. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 3, wherein the upper body proportion is obtained, based on the human body parameters, by the pre-constructed body proportion calculation method:

k = f(h, l, a, d, s, c; w_1, w_2, w_3, w_4)

wherein k represents the upper body proportion, h represents the height of the person to be experienced, l represents the leg length, a represents the arm length, d represents the head length, s represents the shoulder width, c represents the crotch width, and w_1, w_2, w_3, w_4 represent preset weights.
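Since the patent gives the proportion formula only as an image, the sketch below is purely illustrative: it combines the named measurements with the four preset weights, but both the functional form and the default weight values are assumptions:

```python
def upper_body_ratio(height, leg, arm, head, shoulder, crotch,
                     w=(0.4, 0.2, 0.2, 0.2)):
    """Illustrative stand-in for claim 5's body proportion calculation.
    It weights the torso length (height minus leg length), arm length and
    head length relative to height, plus the shoulder-to-crotch width
    ratio, with preset weights w1..w4. Hypothetical formula, not the
    patent's."""
    torso = height - leg  # upper-body length, assuming leg < height
    return (w[0] * torso / height
            + w[1] * arm / height
            + w[2] * head / height
            + w[3] * shoulder / max(crotch, 1e-9))

print(upper_body_ratio(170, 90, 70, 24, 42, 34))
```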
6. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 1, wherein the sitting posture prediction model is constructed based on a three-way convolutional neural network and a machine learning model;
the three-path convolutional neural network inputs a sitting posture detection result corresponding to the upper half body, a sitting posture detection result corresponding to the lower half body and a sitting posture detection result corresponding to the whole body of the current experience personnel; the three-path convolutional neural network is configured to encode input respectively to obtain a first convolution characteristic, a second convolution characteristic and a third convolution characteristic; splicing the first convolution characteristic and the second convolution characteristic, and fusing the spliced first convolution characteristic and the second convolution characteristic with the third convolution characteristic to obtain a fourth convolution characteristic; decoding and Relu activating the fourth convolution characteristic in sequence;
and the machine learning model is a support vector machine regression model and is used for carrying out support vector regression operation based on the fourth convolution characteristic after Relu activation processing and in combination with the human body parameter, the position parameter of the instrument and the relative movement parameter to obtain a standard sitting posture.
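A minimal sketch of claim 6's fusion-then-regression flow, with a plain linear map standing in for the support vector regression; the splicing order, feature shapes and all weight values are assumptions:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def predict_posture(f_upper, f_lower, f_whole, w_reg, b_reg):
    """Splice the upper- and lower-body features (first + second), fuse
    with the whole-body feature to form the fourth feature, ReLU-activate,
    then regress to a scalar posture value. The linear map (w_reg, b_reg)
    is a stand-in for the SVR model."""
    spliced = np.concatenate([f_upper, f_lower])   # splice first + second
    fused = np.concatenate([spliced, f_whole])     # fourth convolution feature
    return float(w_reg @ relu(fused) + b_reg)

out = predict_posture(np.array([1.0, -1.0]), np.array([2.0]),
                      np.array([-3.0, 4.0]), np.ones(5), 0.5)
print(out)  # relu([1,-1,2,-3,4]) = [1,0,2,0,4]; sum + 0.5 = 7.5
```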
7. The image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to claim 1, further comprising judging, according to set conditions, whether to perform sitting posture fine adjustment on the current experience person; the set conditions are: whether the motion speed of the current full-motion simulator is zero; whether the current full-motion simulator has applied braking measures; and whether the seat belt of the experience person in the current full-motion simulator is in a fastened state;
sitting posture fine adjustment is performed on the current experience person if and only if all three conditions are met.
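Claim 7's gating condition reduces to a three-way conjunction, e.g.:

```python
def may_fine_tune(speed, braking, belt_fastened):
    """Claim 7's gate: fine-tune the sitting posture if and only if the
    simulator's motion speed is zero, braking measures have been applied,
    and the occupant's seat belt is fastened."""
    return speed == 0 and braking and belt_fastened

print(may_fine_tune(0, True, True))   # True
print(may_fine_tune(0, True, False))  # False
```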
8. A sitting posture detection and adjustment system for a full-motion simulator based on image recognition, characterized by comprising: image acquisition equipment, a remote server and sitting posture adjusting equipment; the image acquisition equipment and the remote server are in communication connection with the sitting posture adjusting equipment;
the image acquisition equipment is configured to collect, as an input image, a whole-body image of the person at the head of the queue of people waiting to experience the full-motion simulator; the whole-body image contains a set reference object; the image acquisition equipment is further configured to acquire a sitting posture image of the current experience person and an in-cabin environment image of the full-motion simulator cockpit when it is detected that a new experience person is fastening the seat belt;
the remote server includes: the system comprises a first preprocessing module, a second preprocessing module, a joint point extraction module, an initial sitting posture acquisition module, a sitting posture detection module, a parameter extraction module and a standard sitting posture acquisition module;
the first preprocessing module is configured to resample the input image by an interpolation method and preprocess the resampled input image to obtain a preprocessed whole-body image;
the joint point extraction module is configured to extract a candidate region of a human body in the preprocessed whole-body image, input a pre-constructed key point detection network, acquire human body joint point information, and generate human body parameters by combining information of a reference object in the preprocessed whole-body image; the human body parameters comprise height, arm length, leg length, head length, shoulder width and crotch width;
the initial sitting posture acquisition module is configured to acquire an upper body proportion as a first proportion through a pre-constructed body proportion calculation method based on the human body parameters; calculate the difference between the first proportion and the upper body proportion of each first person stored in the database, and select the historical sitting posture adjustment data of the first person corresponding to the minimum difference as the initial sitting posture adjustment data; the first person is a person whose height is within a set range; the initial sitting posture adjustment data is sent to the sitting posture adjusting equipment of the full-motion simulator;
the second preprocessing module is configured to resample and preprocess the sitting posture image and the cabin environment image to obtain a preprocessed sitting posture image and a preprocessed cabin environment image;
the sitting posture detection module is configured to extract a candidate region of a human body in the preprocessed sitting posture image, input a pre-constructed key point detection network, obtain human body joint points and corresponding coordinates thereof, and further obtain a sitting posture detection result of the current experience personnel;
the parameter extraction module is configured to extract position parameters and relative movement parameters of each instrument and equipment based on the preprocessed images of the environment in the cabin; the position parameters of the instrument equipment comprise position information of an instrument panel, position information of a control lever and position information of a foot brake; the relative movement parameters comprise the relative distance of the current experience personnel capable of moving back and forth, left and right and up and down;
the standard sitting posture acquisition module is configured to obtain a standard sitting posture through a pre-constructed sitting posture prediction model, based on the sitting posture detection result and in combination with the human body parameters, the position parameters of the instrument equipment and the relative movement parameters, and to send the standard sitting posture to the sitting posture adjusting equipment;
the sitting posture adjusting equipment is configured to, after detecting that the experience person in the full-motion simulator has left, adjust the seat position according to the initial sitting posture adjustment data and remind the person to be experienced to prepare to board; and the sitting posture adjusting equipment is further configured to perform sitting posture fine adjustment on the current experience person according to the standard sitting posture.
9. An electronic device, comprising:
at least one processor; and a memory communicatively coupled to at least one of the processors;
wherein the memory stores instructions executable by the processor, the instructions being executed by the processor to implement the image recognition-based sitting posture detection and adjustment method for the full-motion simulator of any one of claims 1-7.
10. A computer-readable storage medium storing computer instructions for execution by a computer to implement the image recognition-based sitting posture detection and adjustment method for the full-motion simulator according to any one of claims 1 to 7.
CN202211154508.8A 2022-09-22 2022-09-22 Image recognition-based sitting posture detection and adjustment method for full-motion simulator Active CN115240231B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211154508.8A CN115240231B (en) 2022-09-22 2022-09-22 Image recognition-based sitting posture detection and adjustment method for full-motion simulator


Publications (2)

Publication Number Publication Date
CN115240231A true CN115240231A (en) 2022-10-25
CN115240231B CN115240231B (en) 2022-12-06

Family

ID=83667306


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10338028A1 (en) * 2003-08-19 2005-03-17 Asf Thomas Industries Gmbh Detection of the position of a person sitting in a vehicle seat, by monitoring both a height and an additional parameter, especially a pump pressure used to adjust seat configuration
WO2020244846A1 (en) * 2019-06-03 2020-12-10 Thyssenkrupp Elevator Innovation Center S.A. Passenger detection system for passenger moving systems
CN112308012A (en) * 2020-11-13 2021-02-02 迈渥信息科技(上海)有限公司 Intelligent driving sitting posture prediction system based on cloud
US20210342613A1 (en) * 2020-10-22 2021-11-04 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and apparatus for detecting an abnormal driving posture, device, vehicle and medium
CN114119913A (en) * 2020-08-27 2022-03-01 北京陌陌信息技术有限公司 Human body model driving method, device and storage medium
WO2022099824A1 (en) * 2020-11-16 2022-05-19 深圳技术大学 Human risk pose recognition method and system
CN114572068A (en) * 2022-01-28 2022-06-03 中国第一汽车股份有限公司 Electric seat adjusting method and device based on convolutional neural network and vehicle
CN115035547A (en) * 2022-05-31 2022-09-09 中国科学院半导体研究所 Sitting posture detection method, device, equipment and computer storage medium


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
朱怡婕: "Research on an automated assembly method for automobile seat sitting posture based on motion capture", Automation & Instrumentation *
李洪均 et al.: "Human sitting posture recognition based on mapped-node cascaded broad learning", Journal of Nantong University (Natural Science Edition) *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant