CN115240385A - Method and device for detecting placement position of mobile phone

Method and device for detecting placement position of mobile phone

Info

Publication number
CN115240385A
CN115240385A (application CN202210816433.9A)
Authority
CN
China
Prior art keywords
mobile phone
angle
image
face
gray
Prior art date
Legal status
Pending
Application number
CN202210816433.9A
Other languages
Chinese (zh)
Inventor
林乐新
姜小康
周超
张康
Current Assignee
Shenzhen Shanhui Technology Co ltd
Original Assignee
Shenzhen Shanhui Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Shanhui Technology Co ltd
Priority to CN202210816433.9A
Publication of CN115240385A
Legal status: Pending

Classifications

    • G — PHYSICS
        • G06 — COMPUTING; CALCULATING OR COUNTING
            • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
                • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
                    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
                        • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
                            • G06V 40/161 — Detection; Localisation; Normalisation
                                • G06V 40/165 — using facial parts and geometric relationships
                                • G06V 40/166 — using acquisition arrangements
                            • G06V 40/168 — Feature extraction; Face representation
                            • G06V 40/172 — Classification, e.g. identification
        • G08 — SIGNALLING
            • G08B — SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
                • G08B 21/00 — Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
                    • G08B 21/18 — Status alarms
                        • G08B 21/24 — Reminder alarms, e.g. anti-loss alarms
    • H — ELECTRICITY
        • H04 — ELECTRIC COMMUNICATION TECHNIQUE
            • H04M — TELEPHONIC COMMUNICATION
                • H04M 2250/00 — Details of telephonic subscriber devices
                    • H04M 2250/12 — including a sensor for measuring a physical value, e.g. temperature or motion
                    • H04M 2250/52 — including functional features of a camera

Abstract

The invention discloses a method for detecting the placement position of a mobile phone. The phone's inclination angle is obtained from the three-axis accelerometer built into the smartphone, the face angle is obtained from a face image captured by the front camera, and the neck bending angle is derived from the phone inclination angle and the face angle. Data from the phone's built-in gyroscope are combined with the accelerometer data to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, from which features are extracted and classified to obtain the phone's placement position. The head-down posture is then classified from the placement position and the neck bending angle to identify unhealthy head-down behavior. This solves the problem that the neck bending angle is difficult to measure accurately by visual inspection. Behavior data can be acquired accurately through smartphone-based behavior recognition, the method works in most everyday scenarios and real environments without disturbing normal life, it is unobtrusive to the user, and it improves the smartphone user experience.

Description

Method and device for detecting placement position of mobile phone
Technical Field
The invention belongs to the technical field of mobile phone detection, and particularly relates to a method and a device for detecting the placement position of a mobile phone.
Background
As people's dependence on smartphones grows, the smartphone has become an essential product carried at all times. With continued technical development, most smartphones on the market now integrate an accelerometer and a gyroscope, and detection performed on the smartphone itself avoids the inconvenience of wearing dedicated position-detection equipment. When the phone is placed at different positions, the sensors produce different signal values, so the phone's position also provides important information for smartphone-based behavior recognition. As the processing and context-sensing capabilities of smartphones improve, more complex behavior recognition can be supported, moving from recognition at a single fixed position to recognition at different sensor positions applicable to more scenarios. However, unhealthy behaviors arise during phone use, such as looking down at the phone for long periods. Looking at the phone while walking, for example, harms the user's health and can even threaten personal safety. Behavior recognition based on computer vision in principle gives good recognition results, but it is affected by many external factors, such as image sharpness and the illumination conditions when the image is captured; wearable devices, meanwhile, offer only mediocre wearing comfort, collect data with significant limitations, and are often expensive to manufacture.
Disclosure of Invention
In view of this, the invention provides a method and a system for detecting the placement position of a mobile phone. Addressing the problems of combining multiple classifiers and of recognizing phone-use behavior in different scenarios, it uses an ensemble classifier to recognize behavior activities and phone placement positions according to where the phone is placed, and thereby identifies unhealthy behavior during phone use.
In a first aspect, the present invention provides a method for detecting a placement position of a mobile phone, including the following steps:
acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
acquiring the face angle of the human face from the face image, and obtaining the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
acquiring data from the phone's built-in gyroscope, combining it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extracting features from these quantities and classifying them to obtain the phone's placement position;
and classifying the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
As a further improvement of the above technical solution, classifying the head-down posture based on the phone placement position and the neck bending angle to determine the head-down angle and give an early warning of unhealthy behavior includes:
when exactly one face is detected in the face image, starting to acquire the phone's acceleration signal, calculating the phone inclination angle, and locating the pupils and the mouth by edge-operator detection;
acquiring the interpupillary distance and the mouth length to obtain the face angle, and obtaining the neck bending angle as the sum of the phone inclination angle and the face angle;
when the neck bending angle is less than 30 degrees, the behavior is judged to be normal; when the neck bending angle is greater than or equal to 30 degrees, the behavior is judged to be unhealthy.
As a further improvement of the above technical solution, the neck bending angle is obtained according to the sum of the mobile phone inclination angle and the face angle:
presetting the neck bending angle NA as the angle between the vertical plane and the line connecting the center of the first cervical vertebra to the center of the ear;
presetting the phone inclination angle PA as the angle between the smartphone's plane and the horizontal plane; when the user's line of sight is perpendicular to the phone screen, i.e. the face plane is parallel to the screen, the neck bending angle equals the phone inclination angle, the angle between the face plane and the screen plane is the face angle FA, NA = PA and FA = 0, and the neck bending angle is expressed as NA = PA + FA;
when the face angle is greater than 0, the neck bending angle is the sum of the phone inclination angle and the face angle, i.e. NA = PA + FA; when the face angle is less than 0, the same relation NA = PA + FA holds, and the larger the neck bending angle NA, the less healthy the posture.
As a further improvement of the above technical solution, the process of measuring the tilt angle of the mobile phone includes:
setting up a three-dimensional coordinate system for the smartphone: a Cartesian coordinate system O_xyz is attached to the phone body and moves with it, with the x, y and z axes forming a right-handed orthogonal system, where the x axis is aligned with the short axis of the body, the y axis with the long axis of the body, and the z axis with the normal of the plane formed by the x and y axes; when the smartphone lies flat on a desktop, the z axis points along the direction of gravity;
setting the reading of the triaxial acceleration sensor as G according to the gravity action of the acceleration sensor when the acceleration sensor is placed statically p The expression is
Figure BDA0003742598570000031
Wherein G is px 、G py 、G pz Are each G p In the three-dimensional coordinate of the smart phone, the components on the x, y and z axes are defined, R is a rotation matrix of the smart phone relative to the terrestrial coordinate system, g is the gravity acceleration, a r Is the linear acceleration of the smart phone, if the smart phone is in a static state, the linear acceleration a r =0,G p Has the unit of 1G, so G p Is expressed as
Figure BDA0003742598570000032
The inclination angle p of the smart phone is an included angle between a plane formed by the x axis and the y axis of the phone body and the horizontal plane, and the inclination angle p of the smart phone can pass through the expression G p *
Figure BDA0003742598570000033
As a further improvement of the above technical solution, the facial angle is obtained by preprocessing a face image, extracting features, positioning pupils, and positioning mouths, and includes:
performing edge detection on the mouth region with a Canny operator, scanning the detected mouth region sequentially in the horizontal direction, and determining the positions of the left and right mouth corners in the image region;
using Gaussian filtering for image preprocessing, computing the points whose gray level changes most sharply among neighboring points of the gradient magnitude, screening out the edge points of the mouth by a threshold on the gradient magnitude, and locating the mouth, where a threshold of 0.72 in the range [0, 1] is selected to obtain a clear mouth-edge image;
with the coordinates of the left and right mouth corners in the mouth region denoted (x_1, y_1) and (x_2, y_2), the mouth length distance D_m is
D_m = √((x_2 − x_1)² + (y_2 − y_1)²)
As a further improvement of the above technical solution, the process of pupil positioning includes:
the pixel values of the pupil and the eyebrow are lower than those of their surrounding areas, so the pupil position is located by gray-level integral projection: the vertical integral projection gives the pupil's coordinate on the y axis of the eye image, and the horizontal integral projection gives its coordinate on the x axis;
with the horizontal range of the image set to [x_3, x_4], the vertical range to [y_3, y_4], and I(x, y) the gray value of pixel (x, y), the gray-level integral projections of the image in the vertical and horizontal directions, S_v(x) and S_h(y), are respectively
S_v(x) = Σ_{y = y_3..y_4} I(x, y),   S_h(y) = Σ_{x = x_3..x_4} I(x, y);
with the coordinates of the two pupils in the eye region denoted (x_3, y_3) and (x_4, y_4), the interpupillary distance D_e is expressed as
D_e = √((x_4 − x_3)² + (y_4 − y_3)²)
As a further improvement of the above technical solution, classifying the head-down posture based on the phone placement position and the neck bending angle includes:
constructing strong classifiers, each composed of several weak classifiers, and building a cascade classifier from the trained strong classifiers, where several simple classifiers are iteratively combined according to a preset rule to form a strong classifier;
the cascade classifier consists of individual strong classifiers e_1, e_2, ..., e_n, and an image to be detected is recognized as a face image only if it passes every classifier; in the cascade, the earlier classifiers use few features, so their recognition time is short and their accuracy lower, and they screen images quickly, while the later classifiers use many features, take longer, and are more accurate, so the images that survive the earlier screening are recognized more precisely.
As a further improvement of the above technical solution, acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone includes:
converting the RGB value (R, G, B) of any point in the image into (Gray, Gray, Gray) by a shift method, through the expression Gray = (R × 19595 + G × 38469 + B × 7472) >> 16, where R, G and B denote the red, green and blue components and Gray denotes the converted gray value;
and performing an illumination-intensity transformation on the acquired original image, where the image intensity correction includes one or more of histogram equalization, gamma transformation, local contrast enhancement and homomorphic filtering.
As a further improvement of the above technical solution, the image quality is improved by selectively stretching a certain gray-level interval in the image: given two points A(x_5, y_5) and B(x_6, y_6), the transformation function stretches the gray levels between x_5 and x_6 to the range y_5 to y_6; when the slope of the line connecting the two points is greater than 1, the image is dark and the gray-level interval is stretched to improve the image quality; when the slope is less than 1, the image is bright and the image quality is improved by gray-level compression.
In a second aspect, the present invention further provides a device for detecting a placement position of a mobile phone, including:
a first acquisition module, configured to acquire the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
a second acquisition module, configured to acquire the face angle of the human face from the face image and obtain the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
a computing module, configured to acquire data from the phone's built-in gyroscope, combine it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extract features from these quantities and classify them to obtain the phone's placement position;
and a processing module, configured to classify the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
The invention provides a method and a device for detecting the placement position of a mobile phone. The phone's inclination angle is acquired from the smartphone's built-in three-axis accelerometer, the face angle is acquired from a face image, and the neck bending angle is obtained from the phone inclination angle and the face angle; the built-in gyroscope data are combined with the accelerometer data to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, from which features are extracted and classified to obtain the phone's placement position; the head-down posture is then classified from the placement position and the neck bending angle to determine unhealthy head-down behavior and issue an early warning. This solves the problem that the neck bending angle is hard to measure accurately by visual inspection, allows behavior data to be acquired accurately through smartphone-based behavior recognition, adapts well to environmental requirements, works in most scenarios and real environments without disturbing normal life, is unobtrusive to the user, and improves the smartphone user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the invention and therefore should not be regarded as limiting its scope; for those skilled in the art, other related drawings can be derived from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for detecting a placement position of a mobile phone according to the present invention;
FIG. 2 is a diagram illustrating the process of determining bad behavior according to the present invention;
FIG. 3 is a process diagram of head-down posture classification according to the present invention;
fig. 4 is a block diagram of the mobile phone placement position detection device of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. In contrast, when an element is referred to as being "directly on" another element, there are no intervening elements present. The terms "vertical," "horizontal," "left," "right," and the like are used herein for purposes of illustration only.
In the present invention, unless otherwise expressly stated or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, be fixedly connected, detachably connected, or integrally formed; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through the use of two elements or the interaction of two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to specific situations.
Referring to fig. 1, the invention provides a method for detecting a placement position of a mobile phone, which comprises the following steps:
S11: acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
S12: acquiring the face angle of the human face from the face image, and obtaining the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
S13: acquiring data from the phone's built-in gyroscope, combining it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extracting features from these quantities and classifying them to obtain the phone's placement position (a feature-extraction sketch follows this list);
S14: and classifying the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
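As an illustrative, non-limiting sketch of step S13, the following Python snippet shows one way per-window features could be computed from accelerometer and gyroscope samples. The exact definitions of the rotation radius component and the attitude angle are not spelled out in the text, so the angular-velocity magnitude, the accelerometer-derived pitch/roll, the radius-like ratio and the function name used here are assumptions, not the patented formulas.

```python
import numpy as np

def motion_features(acc, gyro, g=9.81):
    """Illustrative per-window features from accelerometer (m/s^2) and
    gyroscope (rad/s) samples of shape (N, 3). The feature definitions
    below are assumptions, not the patent's exact formulas."""
    w_mag = np.linalg.norm(gyro, axis=1)   # angular velocity amplitude per sample
    a_mag = np.linalg.norm(acc, axis=1)
    # Attitude (pitch/roll) from the mean gravity direction, valid when nearly static.
    ax, ay, az = acc.mean(axis=0)
    pitch = float(np.degrees(np.arctan2(-ax, np.hypot(ay, az))))
    roll = float(np.degrees(np.arctan2(ay, az)))
    # Radius-like component r ~ |a - g| / w^2 (assumed, not the patent's definition).
    eps = 1e-6
    radius = float(np.mean(np.abs(a_mag - g) / (w_mag ** 2 + eps)))
    return {"w_mean": float(w_mag.mean()), "w_std": float(w_mag.std()),
            "pitch": pitch, "roll": roll, "radius": radius}
```

A feature vector of this kind would then be passed to the classifier described later to predict the placement position.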
In this embodiment, the face angle is obtained by preprocessing the face image, extracting features, positioning the pupils and positioning the mouth, which includes: performing edge detection on the mouth region with a Canny operator, scanning the detected mouth region sequentially in the horizontal direction, and determining the positions of the left and right mouth corners in the image region; using Gaussian filtering for image preprocessing, computing the points whose gray level changes most sharply among neighboring points of the gradient magnitude, screening out the edge points of the mouth by a threshold on the gradient magnitude, and locating the mouth, where a threshold of 0.72 in the range [0, 1] is selected to obtain a clear mouth-edge image. With the coordinates of the left and right mouth corners in the mouth region denoted (x_1, y_1) and (x_2, y_2), the mouth length distance D_m is
D_m = √((x_2 − x_1)² + (y_2 − y_1)²)
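A minimal sketch of this mouth-localization step is given below, assuming a pre-cropped grayscale mouth region. It follows the Gaussian-filtering-plus-gradient-threshold description (applying the 0.72 threshold to a normalized gradient magnitude) rather than calling a library Canny routine directly; the rule of taking the leftmost and rightmost edge pixels as the mouth corners, like the function name, is an assumption.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def mouth_corners_and_length(mouth_roi):
    """Locate left/right mouth corners in a pre-cropped grayscale mouth region
    and return them with the mouth length D_m (illustrative sketch)."""
    blur = cv2.GaussianBlur(mouth_roi, (5, 5), 0)   # Gaussian preprocessing
    gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)
    mag /= (mag.max() + 1e-9)                       # normalize to [0, 1]
    edges = mag >= 0.72                             # threshold from the description
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None
    left = (int(xs.min()), int(ys[xs.argmin()]))    # (x1, y1)
    right = (int(xs.max()), int(ys[xs.argmax()]))   # (x2, y2)
    d_m = float(np.hypot(right[0] - left[0], right[1] - left[1]))
    return left, right, d_m
```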
Note that the process of pupil positioning includes: the pixel values of the pupil and the eyebrow are lower than those of their surrounding areas, so the pupil position is located by gray-level integral projection; the vertical integral projection gives the pupil's coordinate on the y axis of the eye image and the horizontal integral projection gives its coordinate on the x axis. With the horizontal range of the image set to [x_3, x_4], the vertical range to [y_3, y_4], and I(x, y) the gray value of pixel (x, y), the gray-level integral projections of the image in the vertical and horizontal directions, S_v(x) and S_h(y), are respectively
S_v(x) = Σ_{y = y_3..y_4} I(x, y),   S_h(y) = Σ_{x = x_3..x_4} I(x, y).
With the coordinates of the two pupils in the eye region denoted (x_3, y_3) and (x_4, y_4), the interpupillary distance D_e is expressed as
D_e = √((x_4 − x_3)² + (y_4 − y_3)²)
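The gray-level integral projection step can be sketched as follows, again assuming a pre-cropped grayscale eye region. Taking the minima of the column and row projections as the pupil coordinates is an assumption consistent with the pupil being darker than its surroundings; the function names are illustrative.

```python
import numpy as np

def pupil_center_by_projection(eye_roi):
    """Locate a dark pupil in a grayscale eye region via integral projection."""
    col_proj = eye_roi.sum(axis=0)   # vertical projection S_v(x): sum over y
    row_proj = eye_roi.sum(axis=1)   # horizontal projection S_h(y): sum over x
    x = int(np.argmin(col_proj))     # darkest column -> pupil x coordinate
    y = int(np.argmin(row_proj))     # darkest row    -> pupil y coordinate
    return x, y

def interpupillary_distance(p_left, p_right):
    """D_e between pupil centers (x3, y3) and (x4, y4)."""
    return float(np.hypot(p_right[0] - p_left[0], p_right[1] - p_left[1]))
```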
It should be understood that bad behaviors mainly refer to unhealthy behaviors during phone use. Based on the phone's position, bad behaviors include holding the phone at chest height in the four scenarios of walking, going upstairs, going downstairs and riding public transport, as well as head-down bad behavior, i.e. using the phone with a neck bending angle of 30 degrees or more. The smartphone bad-behavior recognition framework adopted here has four layers: a sensor layer, a raw-data layer, a behavior-label layer and a bad-behavior discrimination layer, which recognize phone-position bad behavior and head-down bad behavior respectively. The three-axis acceleration sensor and the front camera built into the smartphone serve as the sensing devices of the sensor layer for acquiring data and images. The raw-data layer obtains data from the sensor layer, mainly of two kinds: time-series three-axis acceleration data and face images. The behavior-label layer serves as input to the bad-behavior discrimination layer; its main labels cover daily behaviors, the phone placement position and the head-down posture. The bad-behavior discrimination layer judges whether the behavior labels of the third layer correspond to bad behavior and separates bad behavior from healthy behavior: holding the phone at the chest while walking, climbing stairs, descending stairs or riding a bus, or a head-down angle exceeding 30 degrees, is judged to be bad behavior, and all other cases are healthy behavior.
Referring to FIG. 2, classifying the head-down posture based on the phone placement position and the neck bending angle to determine the head-down angle and give an early warning of unhealthy behavior includes:
S21: when exactly one face is detected in the face image, starting to acquire the phone's acceleration signal, calculating the phone inclination angle, and locating the pupils and the mouth by edge-operator detection;
S22: acquiring the interpupillary distance and the mouth length to obtain the face angle, and obtaining the neck bending angle as the sum of the phone inclination angle and the face angle;
S23: when the neck bending angle is less than 30 degrees, the behavior is judged to be normal; when the neck bending angle is greater than or equal to 30 degrees, the behavior is judged to be unhealthy.
In this embodiment, the neck bending angle is obtained from the sum of the phone inclination angle and the face angle: the neck bending angle NA is preset as the angle between the vertical plane and the line connecting the center of the first cervical vertebra to the center of the ear; the phone inclination angle PA is preset as the angle between the smartphone's plane and the horizontal plane. When the user's line of sight is perpendicular to the phone screen, i.e. the face plane is parallel to the screen, the neck bending angle equals the phone inclination angle, the angle between the face plane and the screen plane is the face angle FA, NA = PA and FA = 0, and the neck bending angle is expressed as NA = PA + FA. When the face angle is greater than 0, the neck bending angle is the sum of the phone inclination angle and the face angle, i.e. NA = PA + FA; when the face angle is less than 0, the same relation NA = PA + FA holds, and the larger the neck bending angle NA, the less healthy the posture.
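The relation NA = PA + FA and the 30-degree rule translate directly into a small check; the sketch below is illustrative only, with all angles in degrees.

```python
def neck_bend_angle(phone_tilt_deg, face_angle_deg):
    """NA = PA + FA; the face angle may be negative when the face plane
    tilts the other way relative to the screen."""
    return phone_tilt_deg + face_angle_deg

def is_bad_head_down_posture(phone_tilt_deg, face_angle_deg, threshold_deg=30.0):
    """Bad behavior when the neck bending angle is 30 degrees or more."""
    return neck_bend_angle(phone_tilt_deg, face_angle_deg) >= threshold_deg

# Example: a phone tilted 25 degrees with a 10-degree face angle gives NA = 35,
# which is flagged as bad posture.
```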
It should be noted that the process of measuring the tilt angle of the mobile phone includes: setting up a three-dimensional coordinate system for the smartphone, namely a Cartesian coordinate system O_xyz attached to the phone body that moves with it, with the x, y and z axes forming a right-handed orthogonal system, where the x axis is aligned with the short axis of the body, the y axis with the long axis of the body, and the z axis with the normal of the plane formed by the x and y axes; when the phone lies flat on a desktop, the z axis points along the direction of gravity. Because the acceleration sensor is subject only to gravity when held statically, the reading of the three-axis acceleration sensor is denoted G_p, with the expression
[equation image: G_p expressed in terms of the rotation matrix R, the gravitational acceleration g and the linear acceleration a_r]
where G_px, G_py and G_pz are the components of G_p on the x, y and z axes of the smartphone's coordinate system, R is the rotation matrix of the smartphone relative to the earth coordinate system, g is the gravitational acceleration, and a_r is the smartphone's linear acceleration. If the smartphone is static, the linear acceleration a_r = 0 and G_p has a magnitude of 1 g, so G_p satisfies
√(G_px² + G_py² + G_pz²) = 1 (in units of g).
The inclination angle p of the smartphone is the angle between the plane formed by the body's x and y axes and the horizontal plane, and it can be computed from G_p through the expression
[equation image: inclination angle p as a function of the components of G_p]
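Since the exact expression for the inclination angle is only available as an image, the sketch below uses one common formulation as an assumption: the angle is taken as the arccosine of the normalized z component of a static accelerometer reading, which is not necessarily the patent's exact formula.

```python
import numpy as np

def phone_tilt_deg(gp):
    """Tilt of the phone's x-y plane relative to the horizontal, from a static
    accelerometer reading gp = (Gpx, Gpy, Gpz) in units of g (assumed formula)."""
    gp = np.asarray(gp, dtype=float)
    gz = abs(gp[2]) / (np.linalg.norm(gp) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(gz, 0.0, 1.0))))

# A phone lying flat gives about 0 degrees, e.g. phone_tilt_deg((0, 0, 1)) == 0.0;
# a phone held upright gives about 90 degrees, e.g. phone_tilt_deg((0, 1, 0)) == 90.0.
```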
Referring to FIG. 3, classifying the head-down posture based on the phone placement position and the neck bending angle includes:
S31: constructing strong classifiers, each composed of several weak classifiers, and building a cascade classifier from the trained strong classifiers, where several simple classifiers are iteratively combined according to a preset rule to form a strong classifier;
S32: the cascade classifier consists of individual strong classifiers e_1, e_2, ..., e_n, and an image to be detected is recognized as a face image only if it passes every classifier; in the cascade, the earlier classifiers use few features, so their recognition time is short and their accuracy lower, and they screen images quickly, while the later classifiers use many features, take longer, and are more accurate, so the images that survive the earlier screening are recognized more precisely.
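The early-rejection behavior of the cascade can be sketched as follows; each stage stands for one trained strong classifier e_i, represented here as an assumed (score function, threshold) pair.

```python
def cascade_predict(stages, window):
    """Accept a candidate window as a face only if every stage passes.
    stages: list of (score_fn, threshold) pairs, ordered from the cheap,
    few-feature stages to the slower, more accurate ones."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # rejected early; later, costlier stages are skipped
    return True            # survived all strong classifiers e1..en
```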
In this embodiment, acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone includes: converting the RGB value (R, G, B) of any point in the image into (Gray, Gray, Gray) by a shift method, through the expression Gray = (R × 19595 + G × 38469 + B × 7472) >> 16, where R, G and B denote the red, green and blue components and Gray denotes the converted gray value; and performing an illumination-intensity transformation on the acquired original image, where the image intensity correction includes one or more of histogram equalization, gamma transformation, local contrast enhancement and homomorphic filtering.
It should be noted that the image quality is improved by selectively stretching a certain gray-level interval in the image: given two points A(x_5, y_5) and B(x_6, y_6), the transformation function stretches the gray levels between x_5 and x_6 to the range y_5 to y_6; when the slope of the line connecting the two points is greater than 1, the image is dark and the gray-level interval is stretched to improve the image quality; when the slope is less than 1, the image is bright and the image quality is improved by gray-level compression, which improves the accuracy of detecting the phone's placement position.
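Both the shift-based grayscale conversion and the piecewise-linear gray stretch can be written compactly; the sketch below assumes an 8-bit image and 0 < x5 < x6 < 255, and the function names are illustrative.

```python
import numpy as np

def rgb_to_gray_shift(rgb):
    """Gray = (R*19595 + G*38469 + B*7472) >> 16 for an (H, W, 3) uint8 image."""
    r = rgb[..., 0].astype(np.uint32)
    g = rgb[..., 1].astype(np.uint32)
    b = rgb[..., 2].astype(np.uint32)
    return ((r * 19595 + g * 38469 + b * 7472) >> 16).astype(np.uint8)

def gray_stretch(gray, x5, y5, x6, y6):
    """Piecewise-linear transform mapping [x5, x6] to [y5, y6]; a slope
    greater than 1 stretches a dark interval, a slope below 1 compresses it."""
    out = np.interp(gray.astype(np.float32), [0, x5, x6, 255], [0, y5, y6, 255])
    return out.astype(np.uint8)
```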
Referring to fig. 4, the present invention further provides a device for detecting a placement position of a mobile phone, including:
a first acquisition module, configured to acquire the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
a second acquisition module, configured to acquire the face angle of the human face from the face image and obtain the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
a computing module, configured to acquire data from the phone's built-in gyroscope, combine it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extract features from these quantities and classify them to obtain the phone's placement position;
and a processing module, configured to classify the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
In this embodiment, with the ubiquity of mobile phone applications and the continuous development of sensing technology, smartphone-based behavior recognition is attracting more and more attention, and the processing and sensing capabilities of mobile devices have improved greatly, so more complex phone-sensing applications can be supported. A person typically has six pockets in which a phone can be placed: the two front trouser pockets, the two back trouser pockets, and the left and right jacket pockets. Because the phone is placed at different angles, the collected data also differ; to recognize all kinds of actions and phone positions more completely and accurately, the specific placement positions considered are: held in the hand at chest height, in the right jacket pocket, and in the front-right trouser pocket, with the phone screen facing the body and the charging port downward.
It should be noted that the process of acquiring raw data is often disturbed by the external environment, such as friction between the acquisition device and clothing or slight shaking when the phone is held by hand, which makes the collected raw three-axis acceleration data imperfect, noisy and redundant; to improve the quality of the raw data and the accuracy of subsequent feature extraction and recognition, the raw data need to be preprocessed. A median filter slides a fixed-length window over the original signal and, in each window, replaces the sample at the center of the window with the median of the signal sequence inside it; the window size is usually odd so that a single median is determined. The larger the window, the smoother the result, but for human activity data a window that is too large degrades the classification performance. The main detection process consists of sensor signal acquisition, data preprocessing and feature extraction; the classification model of the ensemble classifier is used to detect two cases, phone-position recognition without distinguishing actions and phone-position recognition with distinguished actions, and the effectiveness of the ensemble classifier in recognizing phone-position bad behavior is verified by comparing it with five single classifiers.
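A minimal sketch of the median-filter preprocessing described above is given below for a single acceleration axis; the window length of 5 is an assumed default.

```python
import numpy as np

def median_filter_1d(signal, window=5):
    """Slide an odd-length window over the signal and replace the center
    sample with the median of the window (edge-padded at the borders)."""
    assert window % 2 == 1, "window length must be odd so a single median exists"
    half = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), half, mode="edge")
    return np.array([np.median(padded[i:i + window]) for i in range(len(signal))])
```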
In all examples shown and described herein, any particular value should be construed as merely exemplary, and not as a limitation, and thus other examples of example embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The above-mentioned embodiments express only several embodiments of the present invention, and their description is specific and detailed, but they are not to be construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, and these fall within the scope of the present invention.

Claims (10)

1. A method for detecting the placement position of a mobile phone is characterized by comprising the following steps:
acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
acquiring the face angle of the human face from the face image, and obtaining the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
acquiring data from the phone's built-in gyroscope, combining it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extracting features from these quantities and classifying them to obtain the phone's placement position;
and classifying the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
2. The method of claim 1, wherein classifying the head-down posture based on the phone placement position and the neck bending angle to determine the head-down angle and give an early warning of unhealthy behavior comprises:
when exactly one face is detected in the face image, starting to acquire the phone's acceleration signal, calculating the phone inclination angle, and locating the pupils and the mouth by edge-operator detection;
acquiring the interpupillary distance and the mouth length to obtain the face angle, and obtaining the neck bending angle as the sum of the phone inclination angle and the face angle;
when the neck bending angle is less than 30 degrees, the behavior is judged to be normal; when the neck bending angle is greater than or equal to 30 degrees, the behavior is judged to be unhealthy.
3. The method for detecting the placement position of a mobile phone according to claim 2, wherein the neck bending angle is obtained from the sum of the phone inclination angle and the face angle:
the neck bending angle NA is preset as the angle between the vertical plane and the line connecting the center of the first cervical vertebra to the center of the ear;
the phone inclination angle PA is preset as the angle between the smartphone's plane and the horizontal plane; when the user's line of sight is perpendicular to the phone screen, i.e. the face plane is parallel to the screen, the neck bending angle equals the phone inclination angle, the angle between the face plane and the screen plane is the face angle FA, NA = PA and FA = 0, and the neck bending angle is expressed as NA = PA + FA;
when the face angle is greater than 0, the neck bending angle is the sum of the phone inclination angle and the face angle, i.e. NA = PA + FA; when the face angle is less than 0, the same relation NA = PA + FA holds, and the larger the neck bending angle NA, the less healthy the posture.
4. The method for detecting the placement position of a mobile phone according to claim 2, wherein the process of measuring the inclination angle of the mobile phone comprises:
setting up a three-dimensional coordinate system for the smartphone: a Cartesian coordinate system O_xyz is attached to the phone body and moves with it, with the x, y and z axes forming a right-handed orthogonal system, where the x axis is aligned with the short axis of the body, the y axis with the long axis of the body, and the z axis with the normal of the plane formed by the x and y axes; when the smartphone lies flat on a desktop, the z axis points along the direction of gravity;
because the acceleration sensor is subject only to gravity when held statically, the reading of the three-axis acceleration sensor is denoted G_p, with the expression
[equation image: G_p expressed in terms of the rotation matrix R, the gravitational acceleration g and the linear acceleration a_r]
wherein G_px, G_py and G_pz are the components of G_p on the x, y and z axes of the smartphone's coordinate system, R is the rotation matrix of the smartphone relative to the earth coordinate system, g is the gravitational acceleration, and a_r is the smartphone's linear acceleration; if the smartphone is static, the linear acceleration a_r = 0 and G_p has a magnitude of 1 g, so G_p satisfies
√(G_px² + G_py² + G_pz²) = 1 (in units of g);
the inclination angle p of the smartphone is the angle between the plane formed by the body's x and y axes and the horizontal plane, and it can be computed from G_p through the expression
[equation image: inclination angle p as a function of the components of G_p]
5. The method for detecting the placement position of a mobile phone according to claim 1, wherein the face angle is obtained by preprocessing the face image, extracting features, positioning the pupils and positioning the mouth, comprising:
performing edge detection on the mouth region with a Canny operator, scanning the detected mouth region sequentially in the horizontal direction, and determining the positions of the left and right mouth corners in the image region;
using Gaussian filtering for image preprocessing, computing the points whose gray level changes most sharply among neighboring points of the gradient magnitude, screening out the edge points of the mouth by a threshold on the gradient magnitude, and locating the mouth, where a threshold of 0.72 in the range [0, 1] is selected to obtain a clear mouth-edge image;
with the coordinates of the left and right mouth corners in the mouth region denoted (x_1, y_1) and (x_2, y_2), the mouth length distance D_m is
D_m = √((x_2 − x_1)² + (y_2 − y_1)²)
6. The method for detecting the placement position of a mobile phone according to claim 5, wherein the process of pupil positioning comprises:
the pixel values of the pupil and the eyebrow are lower than those of their surrounding areas, so the pupil position is located by gray-level integral projection: the vertical integral projection gives the pupil's coordinate on the y axis of the eye image, and the horizontal integral projection gives its coordinate on the x axis;
with the horizontal range of the image set to [x_3, x_4], the vertical range to [y_3, y_4], and I(x, y) the gray value of pixel (x, y), the gray-level integral projections of the image in the vertical and horizontal directions, S_v(x) and S_h(y), are respectively
S_v(x) = Σ_{y = y_3..y_4} I(x, y),   S_h(y) = Σ_{x = x_3..x_4} I(x, y);
with the coordinates of the two pupils in the eye region denoted (x_3, y_3) and (x_4, y_4), the interpupillary distance D_e is expressed as
D_e = √((x_4 − x_3)² + (y_4 − y_3)²)
7. The method for detecting the placement position of a mobile phone according to claim 1, wherein classifying the head-down posture based on the phone placement position and the neck bending angle comprises:
constructing strong classifiers, each composed of several weak classifiers, and building a cascade classifier from the trained strong classifiers, where several simple classifiers are iteratively combined according to a preset rule to form a strong classifier;
the cascade classifier consists of individual strong classifiers e_1, e_2, ..., e_n, and an image to be detected is recognized as a face image only if it passes every classifier; in the cascade, the earlier classifiers use few features, so their recognition time is short and their accuracy lower, and they screen images quickly, while the later classifiers use many features, take longer, and are more accurate, so the images that survive the earlier screening are recognized more precisely.
8. The method for detecting the placement position of a mobile phone according to claim 1, wherein acquiring the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone comprises:
converting the RGB value (R, G, B) of any point in the image into (Gray, Gray, Gray) by a shift method, through the expression Gray = (R × 19595 + G × 38469 + B × 7472) >> 16, where R, G and B denote the red, green and blue components and Gray denotes the converted gray value;
and performing an illumination-intensity transformation on the acquired original image, where the image intensity correction includes one or more of histogram equalization, gamma transformation, local contrast enhancement and homomorphic filtering.
9. The method as claimed in claim 8, wherein the image quality is improved by selectively stretching a certain gray-level interval in the image: given two points A(x_5, y_5) and B(x_6, y_6), the transformation function stretches the gray levels of the original image between x_5 and x_6 to the range y_5 to y_6; when the slope of the line connecting the two points is greater than 1, the image is dark and the gray-level interval is stretched to improve the image quality; when the slope is less than 1, the image is bright and the image quality is improved by gray-level compression.
10. A device for detecting the placement position of a mobile phone according to the detection method of any one of claims 1 to 9, comprising:
a first acquisition module, configured to acquire the inclination angle of the mobile phone from the three-axis accelerometer built into the smartphone, wherein the inclination angle is extracted from the three-axis acceleration sensor and the smartphone's front camera captures a face image;
a second acquisition module, configured to acquire the face angle of the human face from the face image and obtain the neck bending angle from the phone inclination angle and the face angle, wherein the face angle is obtained through face-image preprocessing, feature extraction, pupil positioning and mouth positioning;
a computing module, configured to acquire data from the phone's built-in gyroscope, combine it with the data acquired by the three-axis acceleration sensor to calculate the phone's rotation radius component, angular velocity amplitude and attitude angle, and extract features from these quantities and classify them to obtain the phone's placement position;
and a processing module, configured to classify the head-down posture based on the phone placement position and the neck bending angle, so as to determine unhealthy head-down behavior and issue an early warning.
CN202210816433.9A 2022-07-12 2022-07-12 Method and device for detecting placement position of mobile phone Pending CN115240385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210816433.9A CN115240385A (en) 2022-07-12 2022-07-12 Method and device for detecting placement position of mobile phone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210816433.9A CN115240385A (en) 2022-07-12 2022-07-12 Method and device for detecting placement position of mobile phone

Publications (1)

Publication Number Publication Date
CN115240385A true CN115240385A (en) 2022-10-25

Family

ID=83672749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210816433.9A Pending CN115240385A (en) 2022-07-12 2022-07-12 Method and device for detecting placement position of mobile phone

Country Status (1)

Country Link
CN (1) CN115240385A (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424771A (en) * 2013-09-10 2015-03-18 中兴通讯股份有限公司 Warning method and device
CN103927250A (en) * 2014-04-16 2014-07-16 北京尚德智产投资管理有限公司 User posture detecting method achieved through terminal device
CN106933358A (en) * 2017-02-28 2017-07-07 宇龙计算机通信科技(深圳)有限公司 A kind of cervical vertebra guard method, device and intelligent helmet
CN107273823A (en) * 2017-05-26 2017-10-20 西安理工大学 A kind of neck attitude monitoring method merged based on sensor with image procossing
CN112527094A (en) * 2019-09-18 2021-03-19 华为技术有限公司 Human body posture detection method and electronic equipment
TW202113553A (en) * 2019-09-25 2021-04-01 崑山科技大學 Warning method for bad posture when using mobile device
TWM605503U (en) * 2020-06-11 2020-12-21 啟英學校財團法人桃園市啟英高級中等學校 Spinal cord cowection
TWM629292U (en) * 2021-10-05 2022-07-11 仁德醫護管理專科學校 Device for correction neck posture

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
任磊 et al., "Fall detection method with adaptive recognition of the wearing position of mobile devices", Computer Engineering and Applications, vol. 54, no. 21, 1 November 2018, pages 7-12 *
柯若勐, "Research on bad-behavior recognition methods based on smartphones", China Master's Theses Full-text Database, Information Science and Technology, no. 1, 15 January 2022, pages 6-7 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination