CN111967327A - Reading state identification method and device, computer equipment and readable storage medium - Google Patents

Info

Publication number: CN111967327A
Application number: CN202010687691.2A
Authority: CN (China)
Prior art keywords: image, reading, user, similarity, preset
Legal status: Granted; Active
Other languages: Chinese (zh)
Other versions: CN111967327B (en)
Inventor: Tian Yuan (田源)
Current Assignee: Shenzhen Waterward Information Co Ltd
Original Assignee: Shenzhen Water World Co Ltd
Events: application filed by Shenzhen Water World Co Ltd; priority to CN202010687691.2A; publication of CN111967327A; application granted; publication of CN111967327B

Classifications

    • G06V40/20 Movements or behaviour, e.g. gesture recognition (under G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)
    • G06F18/22 Matching criteria, e.g. proximity measures (under G06F18/00 Pattern recognition; G06F18/20 Analysing)
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components (under G06V10/40 Extraction of image or video features)
    • G06V20/10 Terrestrial scenes (under G06V20/00 Scenes; scene-specific elements)


Abstract

The application provides a reading state identification method and device, computer equipment and a readable storage medium. A reading robot monitors whether a page turning action occurs within a preset time; if no page turning action occurs within the preset time, it acquires the current user posture and then judges, according to the user posture, whether the user is currently in a non-reading state; if the user is in a non-reading state, it executes a preset action. By monitoring the page turning action and the user posture during reading, the reading robot can effectively identify the user's reading state; when it recognizes that the user is in a non-reading state, it reminds the user by executing a preset action, thereby ensuring the user's reading quality.

Description

Reading state identification method and device, computer equipment and readable storage medium
Technical Field
The present application relates to the field of intelligent robot technology, and in particular, to a reading state identification method, apparatus, computer device, and readable storage medium.
Background
Books are the ladder of human progress, and reading, as a way of learning, plays an indispensable role in a person's growth. An existing reading robot can automatically read book content aloud and thereby accompany a user while reading. However, during reading, the reading robot cannot recognize the user's reading state; the user may doze off, which affects reading quality.
Disclosure of Invention
The main objective of the present application is to provide a reading state identification method, a reading state identification device, computer equipment, and a readable storage medium, so as to overcome the defect that an existing reading robot cannot identify the reading state of a user, which affects reading quality.
In order to achieve the above object, the present application provides a reading state identification method, including:
monitoring whether a page turning action occurs within a preset time;
if no page turning action occurs within the preset time, acquiring a current user posture;
judging whether the user is currently in a non-reading state according to the user posture;
and if the user is currently in a non-reading state, executing a preset action.
Further, the step of monitoring whether a page turning action occurs within a preset time includes:
acquiring book contents through a first camera;
judging whether the book content is changed within the preset time;
if the book content is changed within the preset time, judging that a page turning action is monitored within the preset time;
and if the book content is not changed within the preset time, judging that the page turning action is not monitored within the preset time.
Further, the step of determining whether the user is currently in a non-reading state according to the user posture includes:
acquiring the spacing distance between the limbs of the user and the book placing platform;
judging whether the spacing distance is within a preset distance range;
if the spacing distance is not within the preset distance range, acquiring a user image;
judging whether the first similarity between the user image and the non-reading image is not less than a first threshold value or not;
if the first similarity between the user image and the non-reading image is not smaller than a first threshold value, judging that the user is in a non-reading state currently;
and if the first similarity between the user image and the non-reading image is smaller than a first threshold value, judging that the user is not in a non-reading state currently.
Further, the user image includes a first environment image and a first posture image, the non-reading image is included in a preset gallery, the preset gallery includes a plurality of initial non-reading images, and the step of determining whether the first similarity between the user image and the non-reading image is not less than a first threshold includes:
screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not less than a second threshold as the non-reading image, wherein the non-reading image includes a second posture image;
performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
judging whether the similarity value is not less than the first threshold;
if the similarity value is not less than the first threshold, determining that the first similarity between the user image and the non-reading image is not less than the first threshold;
and if the similarity value is less than the first threshold, determining that the first similarity between the user image and the non-reading image is less than the first threshold.
Further, the initial non-reading image includes a second environment image, and the step of screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not less than the second threshold as the non-reading image includes:
comparing the article contour and article layout of the first environment image with the article contour and article layout of each second environment image, and calculating the second similarity corresponding to each initial non-reading image;
and comparing the second similarity corresponding to each initial non-reading image with the second threshold value, and screening to obtain the initial non-reading image with the second similarity not less than the second threshold value as the non-reading image.
Further, the step of performing similarity matching between the first posture image and each of the second posture images and calculating a similarity value according to a preset rule includes:
performing similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, to obtain initial similarity values respectively corresponding to each second posture image and the first posture image;
and calculating the average value of the initial similarity values to obtain the similarity value.
Further, after the step of determining whether the user is currently in a non-reading state according to the user posture when no page turning action occurs within the preset time, the method includes:
if the user is currently in a non-reading state, storing the image corresponding to the user posture in a sleep gallery, wherein the sleep gallery is associated with the account number of the user;
and if the user is not in a non-reading state at present, storing the image corresponding to the user posture in a reading gallery, wherein the reading gallery is associated with the account number of the user.
The application also provides a reading state recognition device, including:
the monitoring module is used for monitoring whether page turning action occurs within preset time;
the acquisition module is used for acquiring the current user posture if no page turning action occurs within the preset time;
the judging module is used for judging whether the user is in a non-reading state currently or not according to the user posture;
and the execution module is used for executing the preset action if the user is in a non-reading state currently.
Further, the monitoring module includes:
the first obtaining submodule is used for obtaining book contents through the first camera;
the first judgment submodule is used for judging whether the book content is changed within the preset time;
the first judging submodule is used for judging that a page turning action is monitored within the preset time if the book content is changed within the preset time;
and the second judging submodule is used for judging that the page turning action is not monitored in the preset time if the book content is not changed in the preset time.
Further, the determining module includes:
the second acquisition submodule is used for acquiring the spacing distance between the user limb and the book placing platform;
the second judgment submodule is used for judging whether the spacing distance is within a preset distance range;
the third obtaining sub-module is used for obtaining the user image if the spacing distance is not within the preset distance range;
the third judgment submodule is used for judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value or not;
a third judging submodule, configured to judge that the user is currently in a non-reading state if the first similarity between the user image and the non-reading image is not smaller than a first threshold;
and the fourth judging submodule is used for judging that the user is not in a non-reading state currently if the first similarity between the user image and the non-reading image is smaller than a first threshold value.
Further, the user image includes a first environment image and a first pose image, the non-read image is included in a preset gallery, the preset gallery includes a plurality of initial non-read images, and the third determining sub-module includes:
the screening unit is used for screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not less than a second threshold as the non-reading image, the non-reading image including a second posture image;
the matching unit is used for performing similarity matching between the first posture image and each second posture image and calculating a similarity value according to a preset rule;
a judging unit configured to judge whether the similarity value is not less than the first threshold;
a first determination unit, configured to determine that a first similarity between the user image and a non-reading image is not less than a first threshold if the similarity value is not less than the first threshold;
and the second judging unit is used for judging that the first similarity between the user image and the non-reading image is smaller than a first threshold value if the similarity value is smaller than the first threshold value.
Further, the initial non-reading image includes a second environment image, and the filtering unit includes:
the first calculating subunit is configured to compare the article contour and the article layout of the first environment image with the article contour and the article layout of each second environment image, and calculate to obtain the second similarity corresponding to each initial non-read image;
and the screening subunit is configured to compare the second similarity corresponding to each of the initial non-read images with the second threshold, and screen the initial non-read images with the second similarity not smaller than the second threshold to obtain the non-read images.
Further, the matching unit includes:
the matching subunit is configured to perform similarity matching on the first posture image and each of the second posture images according to an outer contour of the posture image, so as to obtain initial similarity values corresponding to the second posture image and the first posture image;
and the second calculating subunit is used for calculating the average value of the initial similarity values to obtain the similarity value.
Further, the reading state recognition device further includes:
the first storage module is used for storing the image corresponding to the user posture in a sleep gallery if the user is in a non-reading state currently, and the sleep gallery is associated with an account number of the user;
and the second storage module is used for storing the image corresponding to the user posture in a reading gallery if the user is not in a non-reading state currently, and the reading gallery is associated with the account number of the user.
The present application further provides a computer device comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of any one of the above methods when executing the computer program.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above.
According to the reading state identification method and device, the computer equipment and the readable storage medium provided by the application, the reading robot monitors whether a page turning action occurs within a preset time; if no page turning action occurs within the preset time, it acquires the current user posture and then judges, according to the user posture, whether the user is currently in a non-reading state; if the user is in a non-reading state, it executes a preset action. By monitoring the page turning action and the user posture during reading, the reading robot can effectively identify the user's reading state; when it recognizes that the user is in a non-reading state, it reminds the user by executing the preset action, thereby ensuring the user's reading quality.
Drawings
FIG. 1 is a diagram illustrating steps of a reading status identification method according to an embodiment of the present application;
fig. 2 is a block diagram illustrating an overall structure of a reading state recognition apparatus according to an embodiment of the present application;
fig. 3 is a block diagram schematically illustrating a structure of a computer device according to an embodiment of the present application.
The implementation, functional features and advantages of the objectives of the present application will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
Referring to fig. 1, an embodiment of the present application provides a reading status identification method, including:
S1, monitoring whether a page turning action occurs within a preset time;
S2, if no page turning action occurs within the preset time, acquiring the current user posture;
S3, judging whether the user is currently in a non-reading state according to the user posture;
S4, if the user is currently in a non-reading state, executing a preset action.
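For illustration only, the following Python sketch arranges these four steps as a monitoring loop. It is not part of the patent; the robot interface (capture_page_image, page_changed, get_user_posture, is_non_reading, play_reminder) and the 5-minute timeout are hypothetical placeholders.

import time

PAGE_TURN_TIMEOUT = 300  # preset time in seconds; 3-5 minutes per page is typical

def reading_state_loop(robot):
    # S1: watch for a page turn within the preset time.
    last_page = robot.capture_page_image()
    deadline = time.time() + PAGE_TURN_TIMEOUT
    while True:
        current_page = robot.capture_page_image()
        if robot.page_changed(last_page, current_page):
            # A page turn was monitored: the user is reading; reset the timer.
            last_page = current_page
            deadline = time.time() + PAGE_TURN_TIMEOUT
        elif time.time() > deadline:
            posture = robot.get_user_posture()   # S2: acquire the current posture
            if robot.is_non_reading(posture):    # S3: classify the posture
                robot.play_reminder()            # S4: execute the preset action
            deadline = time.time() + PAGE_TURN_TIMEOUT
        time.sleep(1)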
In the reading state identification method provided by this embodiment, the reading robot monitors whether a page turning action occurs within a preset time; if no page turning action occurs within the preset time, it acquires the current user posture and then judges, according to the user posture, whether the user is currently in a non-reading state; if the user is in a non-reading state, it executes a preset action. By monitoring the page turning action and the user posture during reading, the reading robot can effectively identify the user's reading state; when it recognizes that the user is in a non-reading state, it reminds the user by executing the preset action, thereby ensuring the user's reading quality.
In this embodiment, the reading robot is provided with two cameras: a first camera and a second camera. The first camera monitors whether a page turning action occurs on the book the user is currently reading. Specifically, the reading robot acquires the book content, i.e., the text information on the pages, through the first camera and plays the text through an audio player, thereby realizing an automatic reading function. In a normal reading state, the time spent on each page falls within a certain range (for example, 3 to 5 minutes per page), and the user turns the page manually after finishing it. The reading robot therefore monitors the page turning action by judging whether the book content changes within the preset time. If the book content of the current page changes within the preset time, the reading robot determines that a page turning action has been monitored and that the user is currently in a reading state. If the book content of the current page does not change within the preset time, the reading robot determines that no page turning action has been monitored. The user may be dozing off, or may simply be pondering the current page because its content is demanding, so the reading robot needs to perform a further recognition step: it starts the second camera so that the user posture can be judged.
The first camera and the second camera do not need to be opened simultaneously. The reading robot first opens the first camera to acquire the book content and monitor page turning; only when no page turning is detected within the preset time does it open the second camera to acquire the user image. Starting the cameras sequentially effectively saves the reading robot's battery power. In addition, when the second camera is unnecessary (for example, when the first camera has already detected a page turn within the preset time), the reading robot keeps only the first camera on, so it processes image data from a single camera; this effectively reduces the data processing load on the robot's internal system and increases the image processing speed.
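A minimal sketch of this staged camera activation, assuming hypothetical camera objects with open/capture/close methods and a wait_for_page_turn helper:

def acquire_user_image_if_needed(robot, timeout):
    # The book-facing first camera runs alone while pages keep turning.
    robot.camera1.open()
    if robot.wait_for_page_turn(timeout):
        return None  # page turned in time: the user is reading, camera2 stays off
    # Only now is the user-facing second camera powered on.
    robot.camera2.open()
    user_image = robot.camera2.capture()
    robot.camera2.close()  # release it again to save power
    return user_image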
In this embodiment, the above "non-reading state" may be a sleeping state, a lying state, or the like, and is not exhaustive here.
Specifically, the reading robot preferably first obtains the spacing distance between the user's limbs and the book placing platform (for example, the edge of the table on which the book rests) through a distance detector (for example, an infrared detector), and then compares the spacing distance with a preset distance range to judge whether it falls within that range. The preset distance range corresponds to the distance between a seated user and the book placing platform; when the user falls asleep, the torso or cervical spine tilts excessively forward or backward, so the distance between the user's body and the platform inevitably leaves the preset range. The specific values of the preset distance range are set by the designer for different environments and stored in the reading robot. If the spacing distance is within the preset distance range, the user's posture is upright and the user is in a reading state. If it is not, the user is probably in a non-reading state. To further increase recognition accuracy, the reading robot then acquires the current user image through the second camera, matches it against non-reading images, and calculates a first similarity between the user image and the non-reading image. The non-reading images come from a preset gallery containing a plurality of initial non-reading images, each of which is a user posture image collected in advance by the designer showing a non-reading state such as being dazed, slouching, or dozing. The non-reading image used for comparison is the one the reading robot screens from these initial non-reading images as closest to the user's current environment. In this embodiment, the reading robot preprocesses the collected user image with proportional scaling and grayscale conversion, and applies optimizations such as median filtering so that image distortion is minimized. It then performs similarity matching between the first posture image of the user in the user image and the second posture image of the person in the non-reading image, calculates the first similarity between the two images, and compares it with a first threshold. If the similarity is smaller than the first threshold, the reading robot judges that the user is not currently in a non-reading state; if it is not smaller than the first threshold, the robot judges that the user is currently in a non-reading state. Because the distance detector and the second camera are used in succession, layer by layer and in combination, the accuracy with which the reading robot recognizes the user's specific reading state from the user posture is effectively improved.
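The preprocessing described above (proportional scaling, grayscale conversion, median filtering) could look like the following OpenCV sketch; the target width and kernel size are illustrative assumptions, not values from the patent.

import cv2

def preprocess_user_image(bgr_image, target_width=640):
    # Proportional scaling keeps the aspect ratio so the posture is not distorted.
    h, w = bgr_image.shape[:2]
    scale = target_width / float(w)
    resized = cv2.resize(bgr_image, (target_width, int(h * scale)))
    # Grayscale conversion followed by median filtering to suppress noise.
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    return cv2.medianBlur(gray, 5)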
After determining that the user is in a non-reading state, the reading robot executes a preset action, such as playing a sound through the audio player, to rouse the user. Preferably, several preset time periods are configured in the reading robot, with different periods corresponding to different types of prompting actions. When it recognizes that the user is in a non-reading state, the reading robot obtains the current local time through its built-in clock or a network connection, matches the local time against each preset time period to identify which period the current time falls in, and executes the preset action corresponding to that period. A database in the reading robot holds a mapping table between preset time periods and preset actions. For example, the robot may be configured with preset time period A, 7:00-23:00, corresponding to preset action type A, and preset time period B, 23:00-7:00, corresponding to preset action type B. If the current local time is 20:00, which falls in period A, the reading robot executes a type A action, such as playing a sound to wake the user and prompt them to continue reading. If the current local time is 23:30, which falls in period B, the robot executes a type B action, for example playing a sound to wake the user and advise them to rest soon so that the next day's work or study is not affected; it may even shut itself down after the prompt. The time periods for the different action types can be set by the user according to personal habits (for example, 7:00-23:00 may be changed to 7:00-22:00); if the user does not customize them, the factory defaults apply.
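The mapping from preset time periods to preset action types can be a simple lookup table. The sketch below handles a period such as 23:00-7:00 that wraps past midnight; the period boundaries mirror the example above, and the action names are illustrative assumptions.

from datetime import datetime, time as dtime

# (start, end, action type); period B wraps past midnight.
PRESET_ACTIONS = [
    (dtime(7, 0), dtime(23, 0), "type_A_remind_and_continue"),
    (dtime(23, 0), dtime(7, 0), "type_B_remind_and_rest"),
]

def select_preset_action(now=None):
    now = (now or datetime.now()).time()
    for start, end, action in PRESET_ACTIONS:
        if start <= end:
            if start <= now < end:
                return action
        elif now >= start or now < end:  # interval wrapping midnight
            return action
    return "type_A_remind_and_continue"  # fallback if no period matches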
Further, the step of monitoring whether a page turning action occurs within a preset time includes:
S101, acquiring book content through a first camera;
S102, judging whether the book content is changed within the preset time;
S103, if the book content is changed within the preset time, judging that a page turning action is monitored within the preset time;
S104, if the book content is not changed within the preset time, judging that no page turning action is monitored within the preset time.
In this embodiment, a first camera is disposed on the reading robot, and the reading robot can scan the book through this camera to obtain the book content (i.e., the text information on the pages). The reading robot reads the book content aloud at a certain speech rate (the rate may be a factory default or set by the user), realizing the automatic reading function. In general, if the user is in a reading state, the user turns the page manually some time after the reading robot has finished reading the current page, so that the robot obtains new book content. The reading robot can therefore judge whether the user has turned the page by identifying whether the book content changes within the preset time. If, within the preset time, the robot acquires book content through the first camera that differs from the current page, it determines that a page turning action by the user has been monitored and that the user is currently in a reading state. If the book content does not change for a period after the robot finishes (or is deemed to finish) reading the current page, allowing for digestion and thinking time (the preset time being the sum of the reading time and this period), the robot determines that no page turning action has been monitored within the preset time (during this interval the robot performs no reading action).
Preferably, the reading robot can acquire the page number through the first camera and judge whether the user has turned the page by checking whether the page number changes within the preset time. Because a book's page number is a distinctive marker, identifying changes in book content by the page number gives high accuracy at low information processing cost.
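As a sketch of this page-number variant, the snippet below OCRs the page image with pytesseract (an assumed dependency; the patent only says the page number is obtained through the first camera) and flags a page turn when the digits change.

import cv2
import pytesseract  # assumed OCR engine; any OCR library would do

def read_page_number(image_path):
    # Restrict OCR to digits, since page numbers are numeric.
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    text = pytesseract.image_to_string(
        img, config="--psm 6 -c tessedit_char_whitelist=0123456789")
    return "".join(ch for ch in text if ch.isdigit())

def page_turned(prev_image_path, curr_image_path):
    return read_page_number(prev_image_path) != read_page_number(curr_image_path)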
Further, the step of judging whether the user is currently in a non-reading state according to the user posture includes:
S301, acquiring the spacing distance between the user's limbs and the book placing platform;
S302, judging whether the spacing distance is within a preset distance range;
S303, if the spacing distance is not within the preset distance range, acquiring a user image;
S304, judging whether the first similarity between the user image and the non-reading image is not less than a first threshold;
S305, if the first similarity between the user image and the non-reading image is not less than the first threshold, judging that the user is currently in a non-reading state;
S306, if the first similarity between the user image and the non-reading image is less than the first threshold, judging that the user is not currently in a non-reading state.
In this embodiment, a distance detector (for example, an infrared detector) is disposed on the reading robot, and the robot obtains the spacing distance between the user's limbs and the book placing platform (for example, a desk) through it. The monitored limb is preferably the user's torso region or neck region: the torso region has a large area, which makes the distance easy to detect, while the neck region, though small, shows a more pronounced distance change when the reading posture changes (for example, when the user falls asleep and leans backward, the change in distance between the neck region and the platform is clearly larger than that of the torso region). Different limb regions correspond to different preset distance ranges (the specific correspondence is set by the designer and not detailed here). After detecting the spacing distance, the reading robot matches the corresponding preset distance range according to the limb region currently being detected, compares the spacing distance with that range, and judges whether it falls inside. If it does, the user is currently in a reading state. If it does not, the reading robot acquires the user image through the second camera. The user image includes a first environment image and a first posture image; according to the first environment image, the robot screens from the preset gallery a non-reading image whose environment matches it. The non-reading image includes a second posture image, and the robot performs similarity matching between the first posture image and the second posture image to obtain the first similarity between the user image and the non-reading image. It then compares this first similarity with the preset first threshold. The specific value of the first threshold is set by the designer for different environments and is not limited here. If the first similarity is not smaller than the first threshold, the user's current posture is a sleeping posture and the robot judges that the user is currently in a non-reading state; if it is smaller than the first threshold, the robot judges that the user is not currently in a non-reading state.
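Steps S301-S306 amount to a two-stage gate: a cheap distance check first, then image similarity only when the distance looks abnormal. A sketch, where read_distance, capture_user_image and first_similarity are hypothetical interfaces and the numeric values are illustrative only:

def judge_non_reading(robot, dist_range=(0.3, 0.8), first_threshold=0.75):
    distance = robot.read_distance()                 # S301: infrared distance detector
    low, high = dist_range
    if low <= distance <= high:                      # S302: within the preset range
        return False                                 # posture is upright: reading
    user_image = robot.capture_user_image()          # S303: second camera
    similarity = robot.first_similarity(user_image)  # S304: match non-reading images
    return similarity >= first_threshold             # S305 / S306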
Further, the step of determining whether the first similarity between the user image and the non-reading image is not less than a first threshold includes:
S3041, screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not less than a second threshold as the non-reading image, wherein the non-reading image includes a second posture image;
S3042, performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
S3043, judging whether the similarity value is not less than the first threshold;
S3044, if the similarity value is not less than the first threshold, determining that the first similarity between the user image and the non-reading image is not less than the first threshold;
S3045, if the similarity value is less than the first threshold, determining that the first similarity between the user image and the non-reading image is less than the first threshold.
In this embodiment, the user image includes a first environment image and a first posture image: the first environment image captures the shapes and layout of the furniture and articles around the user, and the first posture image captures the current shape and state of the user's limbs. The non-reading image comes from a preset gallery containing a plurality of initial non-reading images, each an image of a user posture in a non-reading state such as dozing, collected in advance by the designer. Each initial non-reading image (and hence each non-reading image) likewise includes a second environment image representing the shapes and layout of furniture and articles, and a second posture image representing the shape and state of the person's limbs. The reading robot first compares the first environment image with the second environment image of each initial non-reading image one by one, performing similarity matching on the outer contour shapes of the articles and on their layout (such as the spacing and relative height between a table and a chair), thereby obtaining the second similarity between the first environment image and each second environment image. It then compares each second similarity with the second threshold and screens the initial non-reading images not lower than the second threshold as the non-reading images for this comparison. The second threshold is set by the designer for different environments and is not detailed here. By comparing environment images in this way, the reading robot screens out non-reading images close to the user's current environment, which avoids, to the greatest extent, interference from environmental factors (for example, different distances between chair and desk produce different sitting postures, and chairs of different styles, such as a soft sofa versus a hard wooden chair, also produce different sitting postures), thereby improving the accuracy of the posture image comparison in the next step. The robot then performs similarity matching between the first posture image and each second posture image and calculates a similarity value according to the preset rule. Specifically, it matches the first posture image against the second posture images of the non-reading images screened this time and computes, from the similarity of the human outer contours in the images, an initial similarity value for each pair (for example, the initial similarity value between second posture image A and the first posture image is a, and that between second posture image B and the first posture image is b). The reading robot may then take the arithmetic mean of the initial similarity values and use the mean as the current similarity value.
Alternatively, the reading robot may sort the initial similarity values in descending order and select the highest-ranked (i.e., largest) initial similarity value as the current similarity value. If the similarity value is not smaller than the first threshold, it is determined that the first similarity between the user image and the non-reading image is not smaller than the first threshold; if the similarity value is smaller than the first threshold, it is determined that the first similarity between the user image and the non-reading image is smaller than the first threshold.
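The two aggregation rules described above (arithmetic mean, or take the largest initial similarity value) can be expressed as one small helper; which rule applies is a configuration choice, not fixed by the text.

def similarity_value(initial_values, rule="mean"):
    # initial_values: one similarity per screened second posture image.
    if not initial_values:
        raise ValueError("no non-reading images were screened for comparison")
    if rule == "max":
        return max(initial_values)  # highest-ranked value after descending sort
    return sum(initial_values) / len(initial_values)  # arithmetic mean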
Further, the step of screening out, from the preset gallery, an initial non-read image having a second similarity to the first environment image that is not less than a second threshold as the non-read image includes:
S30411, comparing the article contour and article layout of the first environment image with the article contour and article layout of each second environment image, and calculating the second similarity corresponding to each initial non-reading image;
S30412, comparing the second similarity corresponding to each initial non-reading image with the second threshold, and screening out the initial non-reading images whose second similarity is not less than the second threshold as the non-reading images.
In this embodiment, the initial non-reading image includes a second environment image. The reading robot compares the article contour and article layout of the first environment image with those of each second environment image, thereby calculating the second similarity corresponding to each initial non-reading image. It then compares each second similarity with the second threshold and screens out the initial non-reading images whose second similarity is not smaller than the second threshold as the non-reading images.
Preferably, the current screening may yield several non-reading images. The reading robot may then calculate a first ratio between the outer contour size of the person in the user image and the outer contour size of the seat (for ease of calculation, the robot can extract the heights of the outer contours and compare those to obtain the first ratio; the second ratio described below can be calculated the same way), and likewise calculate a second ratio between the outer contour size of the person and that of the seat in each non-reading image. The robot compares each second ratio with the first ratio and selects the non-reading image whose second ratio is closest to the first ratio for the current first similarity calculation. Screening by the closest ratio minimizes, when matching the user image against the non-reading image, interference from factors other than the person image (for example, users of different heights sitting on seats of the same size present different body proportions), further improving accuracy.
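A sketch of this ratio-based refinement, assuming a hypothetical contour_height helper and candidate images that expose person and seat regions:

def select_by_person_seat_ratio(user_img, candidates, contour_height):
    # First ratio: person contour height over seat contour height in the live image.
    first_ratio = contour_height(user_img.person) / contour_height(user_img.seat)
    def second_ratio(img):
        return contour_height(img.person) / contour_height(img.seat)
    # Keep the screened non-reading image whose ratio is closest to the user's.
    return min(candidates, key=lambda img: abs(second_ratio(img) - first_ratio))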
Further, the step of performing similarity matching between the first posture image and each of the second posture images and calculating a similarity value according to a preset rule includes:
S30421, performing similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, to obtain initial similarity values respectively corresponding to each second posture image and the first posture image;
S30422, calculating the average value of the initial similarity values to obtain the similarity value.
In this embodiment, the reading robot performs similarity matching between the outer contour of the first posture image and the outer contour of each second posture image, and calculates the initial similarity value for each pair from the degree of coincidence of the outer contours. Preferably, before matching, the robot scales the first posture image to the same proportions as the second posture images so that the coincidence of their outer contours can be compared conveniently. Since there are several second posture images, several initial similarity values are calculated correspondingly. The reading robot adds the initial similarity values and divides the sum by the number of second posture images to obtain the similarity value.
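One plausible realization of the outer-contour matching, using OpenCV. cv2.matchShapes returns a dissimilarity (0 for identical contours), so the sketch maps it into (0, 1]; this particular metric is an assumption, not the patent's specified formula.

import cv2

def largest_contour(gray):
    # Binarize, then keep the biggest external contour as the body outline.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def contour_similarity(gray_a, gray_b):
    d = cv2.matchShapes(largest_contour(gray_a), largest_contour(gray_b),
                        cv2.CONTOURS_MATCH_I1, 0.0)
    return 1.0 / (1.0 + d)  # dissimilarity 0 maps to similarity 1

def average_similarity(first_pose, second_poses):
    # One initial similarity value per second posture image, then the mean.
    values = [contour_similarity(first_pose, p) for p in second_poses]
    return sum(values) / len(values)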
Further, after the step of determining whether the user is currently in a non-reading state according to the user posture when no page turning action occurs within the preset time, the method includes:
S5, if the user is currently in a non-reading state, storing the image corresponding to the user posture in a sleep gallery, wherein the sleep gallery is associated with the account number of the user;
S6, if the user is not currently in a non-reading state, storing the image corresponding to the user posture in a reading gallery, wherein the reading gallery is associated with the account number of the user.
In this embodiment, the user can log in to the reading robot through an account and apply personal settings to it (such as the rest time periods mentioned above). A sleep gallery and a reading gallery associated with the user account are built inside the robot: the sleep gallery stores images of the user's sleeping postures, and the reading gallery stores images of the user's reading postures. After judging from the user posture whether the user is currently in a non-reading state, the robot stores the image corresponding to the current posture (i.e., the user image acquired this time) in the sleep gallery if the user is in a non-reading state, and in the reading gallery otherwise. The next time the reading robot identifies from an image whether the user is in a non-reading state, it preferentially uses the images in the sleep gallery and reading gallery associated with the user's account as the basis for comparison. Because those images were collected from the user personally, recognition of the user's reading posture is more accurate.
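The per-account galleries can be as simple as two directories keyed by the account number; the layout below is an assumption for illustration.

import os
import shutil

def store_posture_image(image_path, account_id, non_reading, root="galleries"):
    # Sleep gallery for non-reading postures, reading gallery otherwise;
    # both live under the user's account so later comparisons are personalized.
    gallery = "sleep" if non_reading else "reading"
    dest_dir = os.path.join(root, account_id, gallery)
    os.makedirs(dest_dir, exist_ok=True)
    shutil.copy(image_path, dest_dir)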
Referring to fig. 2, an embodiment of the present application further provides a reading state identification apparatus, including:
the monitoring module 1 is used for monitoring whether page turning action occurs within preset time;
the acquisition module 2 is used for acquiring the current user posture if no page turning action occurs within the preset time;
the judging module 3 is used for judging whether the user is in a non-reading state currently or not according to the user posture;
and the execution module 4 is used for executing the preset action if the user is in a non-reading state currently.
In this embodiment, the prompting device is provided with two cameras: a first camera and a second camera. The first camera monitors whether a page turning action occurs on the book the user is currently reading. Specifically, the prompting device acquires the book content, i.e., the text information on the pages, through the first camera and plays the text through an audio player, realizing an automatic reading function. In a normal reading state, the time spent on each page falls within a certain range (for example, 3 to 5 minutes per page), and the user turns the page manually after finishing it. The prompting device therefore monitors the page turning action by judging whether the book content changes within the preset time. If the book content of the current page changes within the preset time, the prompting device determines that a page turning action has been monitored and that the user is currently in a reading state. If it does not change, the device determines that no page turning action has been monitored; the user may be dozing off, or may be pondering the current page because its content is demanding, so the prompting device needs to perform a further recognition step and turns on the second camera so that the user posture can be judged.
The prompting device first opens the first camera to acquire the book content and monitor page turning, and opens the second camera to acquire the user image only when no page turning is detected within the preset time. Starting the cameras sequentially effectively saves battery power. In addition, when the second camera is unnecessary (for example, when the first camera has already detected a page turn within the preset time), the prompting device keeps only the first camera on, so it processes image data from a single camera; this effectively reduces the data processing load on the device's internal system and increases the image processing speed.
In this embodiment, the above "non-reading state" may be a sleeping state, a lying state, or the like, and is not exhaustive here.
Specifically, the prompting device preferably first obtains the spacing distance between the user's limbs and the book placing platform (for example, the edge of the table on which the book rests) through a distance detector (for example, an infrared detector), and then compares the spacing distance with a preset distance range to judge whether it falls within that range. The preset distance range corresponds to the distance between a seated user and the book placing platform; when the user falls asleep, the torso or cervical spine tilts excessively forward or backward, so the distance between the user's body and the platform inevitably leaves the preset range. The specific values of the preset distance range are set by the designer for different environments and stored in the prompting device. If the spacing distance is within the preset distance range, the user's posture is upright and the user is in a reading state. If it is not, the user is probably in a non-reading state. To further increase recognition accuracy, the prompting device acquires the current user image through the second camera, matches it against non-reading images, and calculates a first similarity between the user image and the non-reading image. The non-reading images come from a preset gallery containing a plurality of initial non-reading images, each a user posture image collected in advance by the designer showing a non-reading state such as being dazed, slouching, or dozing. The non-reading image used for comparison is the one the prompting device screens from these initial non-reading images as closest to the user's current environment. In this embodiment, the prompting device preprocesses the collected user image with proportional scaling and grayscale conversion, and applies optimizations such as median filtering so that image distortion is minimized. It then performs similarity matching between the first posture image of the user in the user image and the second posture image of the person in the non-reading image, calculates the first similarity between the two images, and compares it with a first threshold. If the similarity is smaller than the first threshold, the prompting device judges that the user is not currently in a non-reading state; if it is not smaller than the first threshold, it judges that the user is currently in a non-reading state. Because the distance detector and the second camera are used in succession, layer by layer and in combination, the accuracy with which the device recognizes the user's specific reading state from the user posture is effectively improved.
After determining that the user is in a non-reading state, the prompting device executes a preset action, such as playing a sound through the audio player, to rouse the user. Preferably, several preset time periods are configured in the prompting device, with different periods corresponding to different types of prompting actions. When it recognizes that the user is in a non-reading state, the prompting device obtains the current local time through its built-in clock or a network connection, matches the local time against each preset time period to identify which period the current time falls in, and executes the preset action corresponding to that period. A database in the prompting device holds a mapping table between preset time periods and preset actions. For example, the device may be configured with preset time period A, 7:00-23:00, corresponding to preset action type A, and preset time period B, 23:00-7:00, corresponding to preset action type B. If the current local time is 20:00, which falls in period A, the prompting device executes a type A action, such as playing a sound to wake the user and prompt them to continue reading. If the current local time is 23:30, which falls in period B, the device executes a type B action, for example playing a sound to wake the user and advise them to rest soon so that the next day's work or study is not affected; it may even shut itself down after the prompt. The time periods corresponding to the prompting actions can be set by the user according to personal habits (for example, 7:00-23:00 may be changed to 7:00-22:00); if the user does not customize them, the factory defaults apply.
Further, the monitoring module 1 includes:
the first obtaining submodule is used for obtaining book contents through the first camera;
the first judgment submodule is used for judging whether the book content is changed within the preset time;
the first judging submodule is used for judging that a page turning action is monitored within the preset time if the book content is changed within the preset time;
and the second judging submodule is used for judging that the page turning action is not monitored in the preset time if the book content is not changed in the preset time.
In this embodiment, the prompting device is provided with the first camera, through which it can scan the book to obtain the book content (i.e., the text information on the pages). The prompting device reads the book content aloud at a certain speech rate (the rate may be a factory default or set by the user), realizing the automatic reading function. In general, if the user is in a reading state, the user turns the page manually some time after the prompting device has finished reading the current page, so that the device obtains new book content. The prompting device can therefore judge whether the user has turned the page by identifying whether the book content changes within the preset time. If, within the preset time, the device acquires book content through the first camera that differs from the current page, it determines that a page turning action by the user has been monitored and that the user is currently in a reading state. If the book content does not change for a period after the device finishes (or is deemed to finish) reading the current page, allowing for digestion and thinking time (the preset time being the sum of the reading time and this period), the device determines that no page turning action has been monitored within the preset time (during this interval the device performs no reading action).
Preferably, the prompting device can acquire the page number through the first camera and judge whether the user has turned the page by checking whether the page number changes within the preset time. Because a book's page number is a distinctive marker, identifying changes in book content by the page number gives high accuracy at low information processing cost.
Further, the determining module 3 includes:
the second obtaining submodule is used for obtaining the spacing distance between the user limb and the book placing platform;
the second judgment submodule is used for judging whether the spacing distance is within a preset distance range;
the third obtaining submodule is used for obtaining the user image if the spacing distance is not within the preset distance range;
the third judgment submodule is used for judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value;
the third judging submodule is used for judging that the user is currently in a non-reading state if the first similarity between the user image and the non-reading image is not smaller than the first threshold value;
and the fourth judging submodule is used for judging that the user is not currently in a non-reading state if the first similarity between the user image and the non-reading image is smaller than the first threshold value.
In this embodiment, a distance detector (for example, an infrared detector) is disposed on the prompting device, and the prompting device obtains the spacing distance between the current user limb and a book placing platform (for example, a desk) through the distance detector. The user limb is preferably the trunk region or the neck region of the user: the trunk region has a large area, which makes its distance convenient for the detector to measure; the neck region has a small area, but when the user's reading posture changes, the change in its distance to the book placing platform is more pronounced (for example, when the user dozes off and leans backwards, the change in the distance between the neck region and the book placing platform is obviously larger than that of the trunk region). Different limb regions detected by the distance detector correspond to different preset distance ranges (the specific correspondence is set by the designer and is not described in detail here). After detecting the spacing distance, the prompting device matches the corresponding preset distance range according to the currently detected limb region, then compares the spacing distance with that preset distance range and judges whether the spacing distance falls within it. If the spacing distance is within the preset distance range, the user is currently in a reading state. If the spacing distance is not within the preset distance range, the prompting device acquires a user image through the second camera. The user image comprises a first environment image and a first posture image, and the prompting device screens, from a preset gallery and according to the first environment image, a non-reading image with the same environment type as the first environment image. The non-reading image comprises a second posture image, and the prompting device performs similarity matching between the first posture image and the second posture image to obtain a first similarity between the user image and the non-reading image. Then, the first similarity is compared with a preset first threshold to judge their magnitude relationship. The specific value of the first threshold is set by the designer according to different environments and is not specifically limited here. If the first similarity between the user image and the non-reading image is not smaller than the first threshold, the current posture of the user is a sleep posture, and the prompting device judges that the user is currently in a non-reading state. If the first similarity between the user image and the non-reading image is smaller than the first threshold, it judges that the user is not currently in a non-reading state.
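The two-stage check above condenses into the following sketch; the injected callables and the per-region range table are hypothetical stand-ins for the distance detector, the second camera, and the matcher, not the patent's actual implementation:

```python
def is_non_reading(limb_region, measure_distance, capture_user_image,
                   first_similarity, distance_ranges, first_threshold):
    """Two-stage check: distance test first, image similarity test second."""
    low, high = distance_ranges[limb_region]   # preset range per limb region
    if low <= measure_distance() <= high:
        return False                           # distance normal: reading state
    user_image = capture_user_image()          # acquired via the second camera
    return first_similarity(user_image) >= first_threshold
```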
Further, the user image includes a first environment image and a first posture image, the non-reading image is included in a preset gallery, the preset gallery includes a plurality of initial non-reading images, and the third determining sub-module includes:
the screening unit is used for screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold value, as the non-reading image, wherein the non-reading image comprises a second posture image;
the matching unit is used for performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
a judging unit configured to judge whether the similarity value is not less than the first threshold;
a first determination unit, configured to determine that a first similarity between the user image and a non-reading image is not less than a first threshold if the similarity value is not less than the first threshold;
and the second judging unit is used for judging that the first similarity between the user image and the non-reading image is smaller than a first threshold value if the similarity value is smaller than the first threshold value.
In this embodiment, the user image includes a first environment image and a first posture image: the first environment image depicts the shapes and layout of the furniture and articles around the user, and the first posture image depicts the current shape and state of the user's limbs. The non-reading image is contained in a preset gallery comprising a plurality of initial non-reading images, each of which is a user posture image, collected in advance by the designer, of a person in a non-reading or dozing state. Each initial non-reading image (and hence each non-reading image) likewise comprises a second environment image representing the shapes and layout of furniture and articles and a second posture image representing the shape and state of the person's limbs. The prompting device first compares the first environment image with the second environment image of each initial non-reading image one by one, performing similarity matching according to the outer contour shapes of the articles in the environment images and the layout positions of the articles (such as the spacing between a desk and a chair, their heights, and other layout attributes), thereby obtaining the second similarity between the first environment image and each second environment image. Each second similarity is then compared with a second threshold, and the initial non-reading images whose second similarity is not lower than the second threshold are screened out as the non-reading images for this comparison. The second threshold is set by the designer according to different environments and is not described in detail here. Through this similarity comparison of environment images, the prompting device screens out from the preset gallery the non-reading images closest to the user's current environment, avoiding the influence of differing environmental factors to the greatest extent (for example, different distances between chair and desk produce different sitting postures, and chairs of different styles, such as a soft sofa versus a hard wooden chair, also produce different sitting postures), thereby improving the accuracy of the subsequent posture image comparison. The prompting device then performs similarity matching between the first posture image and each second posture image, and finally calculates a similarity value according to a preset rule. Specifically, the prompting device matches the first posture image against the second posture images of the non-reading images obtained by the current screening, and calculates, according to the similarity of the human outer contours in the images, an initial similarity value for each second posture image (for example, the initial similarity value between second posture image A and the first posture image is a, and the initial similarity value between second posture image B and the first posture image is b). The prompting device may then take the arithmetic mean of the initial similarity values and use that mean as the current similarity value.
Alternatively, the prompting device may sort the initial similarity values in descending order and select the top-ranked (i.e., largest) initial similarity value as the current similarity value. The prompting device compares the similarity value with the first threshold: if the similarity value is not smaller than the first threshold, it judges that the first similarity between the user image and the non-reading image is not smaller than the first threshold; if the similarity value is smaller than the first threshold, it judges that the first similarity between the user image and the non-reading image is smaller than the first threshold.
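A sketch of this screen-then-match pipeline under stated assumptions: `env_similarity` and `pose_similarity` are placeholder metrics returning values in [0, 1], since the patent leaves the exact measures to the designer, and both aggregation alternatives (mean and maximum) are shown:

```python
from statistics import mean
from typing import NamedTuple

class InitialNonReadingImage(NamedTuple):
    environment: object   # second environment image
    posture: object       # second posture image

def first_similarity(first_env, first_posture, gallery,
                     env_similarity, pose_similarity,
                     second_threshold, use_max=False):
    """Screen the preset gallery by environment, then match postures."""
    # Screening: keep initial non-reading images whose environment matches.
    candidates = [img for img in gallery
                  if env_similarity(first_env, img.environment) >= second_threshold]
    if not candidates:
        return 0.0   # nothing comparable screened from the gallery
    # Matching: one initial similarity value per second posture image.
    initial_values = [pose_similarity(first_posture, img.posture)
                      for img in candidates]
    # Preset rule: arithmetic mean, or alternatively the largest value.
    return max(initial_values) if use_max else mean(initial_values)
```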
Further, the initial non-reading image includes a second environment image, and the screening unit includes:
the first calculating subunit is configured to compare the article contour and article layout of the first environment image with those of each second environment image, and calculate the second similarity corresponding to each initial non-reading image;
and the screening subunit is configured to compare the second similarity corresponding to each initial non-reading image with the second threshold, and screen out the initial non-reading images whose second similarity is not smaller than the second threshold as the non-reading images.
In this embodiment, the initial non-reading image includes a second environment image. The prompting device compares the article contour and article layout of the first environment image with those of each second environment image, thereby calculating the second similarity corresponding to each initial non-reading image. The prompting device then compares the second similarity corresponding to each initial non-reading image with the second threshold, and screens out the initial non-reading images whose second similarity is not smaller than the second threshold as the non-reading images.
Preferably, when the current screening yields a plurality of non-reading images, the prompting device may calculate a first ratio between the outer contour size of the person image and the outer contour size of the seat image in the user image (for convenience of calculation, the device may simply compare the heights of the outer contours to obtain the first ratio; the second ratio described below may be calculated the same way), and then calculate a second ratio between the outer contour size of the person image and the outer contour size of the seat image in each non-reading image. The prompting device compares each second ratio with the first ratio and screens out the non-reading image whose second ratio is closest to the first ratio for the current first similarity calculation. Using the non-reading image with the closest ratio minimizes, during similarity matching between the user image and the non-reading image, the interference of factors other than the person image (for example, users of different heights sitting on seats of the same size present differently shaped figures), further improving accuracy.
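The ratio-based refinement might look like the following sketch, assuming the person and seat contour heights have already been extracted from the images:

```python
def closest_by_ratio(user_person_height, user_seat_height, candidates):
    """Pick the non-reading image whose person/seat contour ratio is closest.

    Each candidate is a (image, person_contour_height, seat_contour_height)
    tuple; contour heights are assumed to be extracted beforehand.
    """
    first_ratio = user_person_height / user_seat_height
    image, _, _ = min(
        candidates,
        key=lambda c: abs(c[1] / c[2] - first_ratio))  # |second ratio - first ratio|
    return image   # used for the current first similarity calculation
```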
Further, the matching unit includes:
the matching subunit is configured to perform similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, so as to obtain the initial similarity value corresponding to each second posture image;
and the second calculating subunit is used for calculating the average value of the initial similarity values to obtain the similarity value.
In this embodiment, the prompting device performs similarity matching between the outer contour of the first posture image and the outer contour of each second posture image, and calculates the initial similarity value corresponding to each second posture image according to the coincidence degree of the outer contours. Preferably, before similarity matching, the prompting device scales the first posture image to the same proportions as the second posture image, which makes the coincidence degree between their outer contours convenient to compare. Since there are a plurality of second posture images, a plurality of initial similarity values are correspondingly calculated. The prompting device adds the initial similarity values to obtain a sum and divides the sum by the number of second posture images, thereby obtaining the similarity value.
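One plausible reading of the coincidence computation, sketched with NumPy; intersection-over-union is an assumption here, as the patent does not fix the coincidence formula, and the masks are assumed pre-scaled to equal proportion:

```python
import numpy as np

def contour_coincidence(mask_a, mask_b):
    """Coincidence of two equal-shape boolean silhouette masks, in [0, 1]."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 0.0

def similarity_value(first_mask, second_masks):
    """Sum the initial similarity values and divide by their count."""
    values = [contour_coincidence(first_mask, m) for m in second_masks]
    return sum(values) / len(values)
```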
Further, the prompting device further includes:
the first storage module 5 is configured to store the image corresponding to the user posture in a sleep gallery if the user is currently in a non-reading state, where the sleep gallery is associated with an account of the user;
and the second storage module 6 is configured to store the image corresponding to the user posture in a reading gallery if the user is not in a non-reading state currently, where the reading gallery is associated with the account of the user.
In this embodiment, when using the prompting device, the user can log in through an account and make the relevant personal settings on the device (such as the rest time period setting mentioned above). A sleep gallery and a reading gallery associated with the user account are constructed in the prompting device: the sleep gallery stores images of the user's sleep postures, and the reading gallery stores images of the user's reading postures. The prompting device judges from the user posture whether the user is currently in a non-reading state; if so, it stores the image corresponding to the current user posture (i.e., the user image acquired this time) in the sleep gallery, and if not, it stores that image in the reading gallery. The next time the prompting device identifies from an image whether the user is in a non-reading state, it preferentially selects the images in the sleep gallery and the reading gallery associated with the user account as the comparison basis. Because those images were acquired from the user, the accuracy of identifying the user's reading posture is higher.
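A minimal sketch of the per-account galleries, with an in-memory structure standing in for the device's database (names and layout are illustrative only):

```python
from collections import defaultdict

# Hypothetical layout: each account owns a sleep gallery and a reading gallery.
galleries = defaultdict(lambda: {"sleep": [], "reading": []})

def store_user_image(account, user_image, non_reading):
    """File the acquired user image according to the recognition result."""
    key = "sleep" if non_reading else "reading"
    galleries[account][key].append(user_image)

def comparison_basis(account):
    """On the next recognition pass, prefer the account's own galleries."""
    g = galleries[account]
    return g["sleep"], g["reading"]
```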
In the reading state identification device provided by this embodiment, the prompting device monitors whether a page turning action occurs within a preset time; if no page turning action occurs within the preset time, it acquires the current user posture and then judges from the user posture whether the user is currently in a non-reading state; if the user is in a non-reading state, it executes a preset action. By monitoring the page turning action and the user posture during reading, the device can effectively identify the user's reading state, and when it identifies that the user is in a non-reading state, it reminds the user by executing a preset action, thereby ensuring the user's reading quality.
Referring to fig. 3, a computer device, which may be a server and whose internal structure may be as shown in fig. 3, is also provided in the embodiment of the present application. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device is designed to provide computation and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing data such as the preset gallery. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by the processor to implement a reading state identification method.
The reading state identification method executed by the processor comprises the following steps:
S1, monitoring whether a page turning action occurs within a preset time;
S2, if no page turning action occurs within the preset time, acquiring the current user posture;
S3, judging whether the user is currently in a non-reading state according to the user posture;
S4, if the user is currently in a non-reading state, executing a preset action.
Further, the step of monitoring whether a page turning action occurs within a preset time includes:
S101, acquiring the book content through a first camera;
S102, judging whether the book content is changed within the preset time;
S103, if the book content is changed within the preset time, judging that a page turning action is monitored within the preset time;
S104, if the book content is not changed within the preset time, judging that no page turning action is monitored within the preset time.
Further, the step of judging whether the user is currently in a non-reading state through the user posture includes:
S301, acquiring the spacing distance between the user limb and the book placing platform;
S302, judging whether the spacing distance is within a preset distance range;
S303, if the spacing distance is not within the preset distance range, acquiring a user image;
S304, judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value;
S305, if the first similarity between the user image and the non-reading image is not smaller than the first threshold value, judging that the user is currently in a non-reading state;
S306, if the first similarity between the user image and the non-reading image is smaller than the first threshold value, judging that the user is not currently in a non-reading state.
Further, the step of determining whether the first similarity between the user image and the non-reading image is not less than a first threshold includes:
S3041, screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold value, as the non-reading image, wherein the non-reading image comprises a second posture image;
S3042, performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
S3043, judging whether the similarity value is not smaller than the first threshold value;
S3044, if the similarity value is not smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is not smaller than the first threshold value;
S3045, if the similarity value is smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is smaller than the first threshold value.
Further, the step of screening out, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold as the non-reading image includes:
S30411, comparing the article contour and article layout of the first environment image with those of each second environment image, and calculating the second similarity corresponding to each initial non-reading image;
S30412, comparing the second similarity corresponding to each initial non-reading image with the second threshold value, and screening out the initial non-reading images whose second similarity is not smaller than the second threshold value as the non-reading images.
Further, the step of performing similarity matching between the first posture image and each second posture image and calculating a similarity value according to a preset rule includes:
S30421, performing similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, to obtain the initial similarity value corresponding to each second posture image;
S30422, calculating the average value of the initial similarity values to obtain the similarity value.
Further, after the step of judging whether the user is currently in a non-reading state according to the user posture, the method includes:
S5, if the user is currently in a non-reading state, storing the image corresponding to the user posture in a sleep gallery, wherein the sleep gallery is associated with the account of the user;
S6, if the user is not currently in a non-reading state, storing the image corresponding to the user posture in a reading gallery, wherein the reading gallery is associated with the account of the user.
An embodiment of the present application further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements a reading state identification method, specifically comprising:
S1, monitoring whether a page turning action occurs within a preset time;
S2, if no page turning action occurs within the preset time, acquiring the current user posture;
S3, judging whether the user is currently in a non-reading state according to the user posture;
S4, if the user is currently in a non-reading state, executing a preset action.
Further, the step of monitoring whether a page turning action occurs within a preset time includes:
S101, acquiring the book content through a first camera;
S102, judging whether the book content is changed within the preset time;
S103, if the book content is changed within the preset time, judging that a page turning action is monitored within the preset time;
S104, if the book content is not changed within the preset time, judging that no page turning action is monitored within the preset time.
Further, the step of judging whether the user is currently in a non-reading state through the user posture includes:
S301, acquiring the spacing distance between the user limb and the book placing platform;
S302, judging whether the spacing distance is within a preset distance range;
S303, if the spacing distance is not within the preset distance range, acquiring a user image;
S304, judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value;
S305, if the first similarity between the user image and the non-reading image is not smaller than the first threshold value, judging that the user is currently in a non-reading state;
S306, if the first similarity between the user image and the non-reading image is smaller than the first threshold value, judging that the user is not currently in a non-reading state.
Further, the step of determining whether the first similarity between the user image and the non-reading image is not less than a first threshold includes:
S3041, screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold value, as the non-reading image, wherein the non-reading image comprises a second posture image;
S3042, performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
S3043, judging whether the similarity value is not smaller than the first threshold value;
S3044, if the similarity value is not smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is not smaller than the first threshold value;
S3045, if the similarity value is smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is smaller than the first threshold value.
Further, the step of screening out, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold as the non-reading image includes:
S30411, comparing the article contour and article layout of the first environment image with those of each second environment image, and calculating the second similarity corresponding to each initial non-reading image;
S30412, comparing the second similarity corresponding to each initial non-reading image with the second threshold value, and screening out the initial non-reading images whose second similarity is not smaller than the second threshold value as the non-reading images.
Further, the step of performing similarity matching between the first posture image and each second posture image and calculating a similarity value according to a preset rule includes:
S30421, performing similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, to obtain the initial similarity value corresponding to each second posture image;
S30422, calculating the average value of the initial similarity values to obtain the similarity value.
Further, after the step of judging whether the user is currently in a non-reading state according to the user posture, the method includes:
S5, if the user is currently in a non-reading state, storing the image corresponding to the user posture in a sleep gallery, wherein the sleep gallery is associated with the account of the user;
S6, if the user is not currently in a non-reading state, storing the image corresponding to the user posture in a reading gallery, wherein the reading gallery is associated with the account of the user.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by hardware associated with instructions of a computer program, which may be stored on a non-volatile computer-readable storage medium and which, when executed, may include the processes of the above method embodiments. Any reference to memory, storage, database, or other medium provided herein and used in the embodiments may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article, or method that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, apparatus, article, or method. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other like elements in a process, apparatus, article, or method that includes the element.
The above description is only for the preferred embodiment of the present application and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings of the present application, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (10)

1. A reading state identification method, comprising:
monitoring whether a page turning action occurs within a preset time;
if no page turning action occurs within the preset time, acquiring the current user posture;
judging whether the user is currently in a non-reading state according to the user posture;
and if the user is currently in a non-reading state, executing a preset action.
2. The reading state identification method of claim 1, wherein the step of monitoring whether a page turning action occurs within a preset time comprises:
acquiring book contents through a first camera;
judging whether the book content is changed within the preset time;
if the book content is changed within the preset time, judging that a page turning action is monitored within the preset time;
and if the book content is not changed within the preset time, judging that the page turning action is not monitored within the preset time.
3. The reading state identification method of claim 1, wherein the step of judging whether the user is currently in a non-reading state through the user posture comprises:
acquiring the spacing distance between the limbs of the user and the book placing platform;
judging whether the spacing distance is within a preset distance range;
if the spacing distance is not within the preset distance range, acquiring a user image;
judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value;
if the first similarity between the user image and the non-reading image is not smaller than the first threshold value, judging that the user is currently in a non-reading state;
and if the first similarity between the user image and the non-reading image is smaller than the first threshold value, judging that the user is not currently in a non-reading state.
4. The reading state identification method of claim 3, wherein the user image comprises a first environment image and a first posture image, the non-reading image is included in a preset gallery, the preset gallery includes a plurality of initial non-reading images, and the step of judging whether the first similarity between the user image and the non-reading image is not smaller than a first threshold value comprises:
screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold value, as the non-reading image, wherein the non-reading image comprises a second posture image;
performing similarity matching between the first posture image and each second posture image, and calculating a similarity value according to a preset rule;
judging whether the similarity value is not smaller than the first threshold value;
if the similarity value is not smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is not smaller than the first threshold value;
and if the similarity value is smaller than the first threshold value, judging that the first similarity between the user image and the non-reading image is smaller than the first threshold value.
5. The reading state identification method of claim 4, wherein the initial non-reading image comprises a second environment image, and the step of screening, from the preset gallery, an initial non-reading image whose second similarity to the first environment image is not smaller than a second threshold value as the non-reading image comprises:
comparing the article contour and article layout of the first environment image with those of each second environment image, and calculating the second similarity corresponding to each initial non-reading image;
and comparing the second similarity corresponding to each initial non-reading image with the second threshold value, and screening out the initial non-reading images whose second similarity is not smaller than the second threshold value as the non-reading images.
6. The reading state identification method of claim 4, wherein the step of performing similarity matching between the first posture image and each second posture image and calculating a similarity value according to a preset rule comprises:
performing similarity matching between the first posture image and each second posture image according to the outer contours of the posture images, to obtain the initial similarity value corresponding to each second posture image;
and calculating the average value of the initial similarity values to obtain the similarity value.
7. The reading state identification method according to claim 1, wherein after the step of judging whether the user is currently in a non-reading state according to the user posture, the method comprises:
if the user is currently in a non-reading state, storing the image corresponding to the user posture in a sleep gallery, wherein the sleep gallery is associated with the account of the user;
and if the user is not currently in a non-reading state, storing the image corresponding to the user posture in a reading gallery, wherein the reading gallery is associated with the account of the user.
8. A reading state identification apparatus, comprising:
the monitoring module is used for monitoring whether a page turning action occurs within a preset time;
the acquisition module is used for acquiring the current user posture if no page turning action occurs within the preset time;
the judging module is used for judging whether the user is currently in a non-reading state according to the user posture;
and the execution module is used for executing a preset action if the user is currently in a non-reading state.
9. A computer device comprising a memory and a processor, the memory having stored therein a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202010687691.2A 2020-07-16 2020-07-16 Reading state identification method, device, computer equipment and readable storage medium Active CN111967327B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010687691.2A CN111967327B (en) 2020-07-16 2020-07-16 Reading state identification method, device, computer equipment and readable storage medium

Publications (2)

Publication Number Publication Date
CN111967327A true CN111967327A (en) 2020-11-20
CN111967327B CN111967327B (en) 2024-06-14

Family

ID=73361870

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010687691.2A Active CN111967327B (en) 2020-07-16 2020-07-16 Reading state identification method, device, computer equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN111967327B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101035169A (en) * 2006-03-06 2007-09-12 英业达股份有限公司 Status prompting system and method
CN105528577A (en) * 2015-12-04 2016-04-27 深圳大学 Identification method based on intelligent glasses
CN107958212A (en) * 2017-11-20 2018-04-24 珠海市魅族科技有限公司 A kind of information cuing method, device, computer installation and computer-readable recording medium
CN108376031A (en) * 2018-03-30 2018-08-07 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device of reading page page turning
KR102041259B1 (en) * 2018-12-20 2019-11-06 최세용 Apparatus and Method for Providing reading educational service using Electronic Book
CN110231871A (en) * 2019-06-14 2019-09-13 腾讯科技(深圳)有限公司 Page reading method, device, storage medium and electronic equipment
CN110286989A (en) * 2019-06-28 2019-09-27 掌阅科技股份有限公司 Reading tip method, electronic equipment and computer storage medium
CN110443224A (en) * 2019-08-14 2019-11-12 广东小天才科技有限公司 Page turning detection method and device, electronic equipment and storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114245068A (en) * 2021-10-29 2022-03-25 安徽淘云科技股份有限公司 Behavior supervision method and device, electronic equipment and storage medium
CN117042257A (en) * 2023-09-21 2023-11-10 永林电子股份有限公司 Multistage dimming LED lamp adjustment control method and device and electronic equipment thereof
CN117042257B (en) * 2023-09-21 2024-02-06 永林电子股份有限公司 Multistage dimming LED lamp adjustment control method and device and electronic equipment thereof

Also Published As

Publication number Publication date
CN111967327B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
US10468025B2 (en) Speech interaction method and apparatus
CN111967327B (en) Reading state identification method, device, computer equipment and readable storage medium
KR101574884B1 (en) Facial gesture estimating apparatus, controlling method, controlling program, and recording medium
CN111067300B (en) Mattress control method and device, electronic equipment and storage medium
JP2006293644A (en) Information processing device and information processing method
US20180276732A1 (en) Skin product fitting method and electronic apparatus therefor
CN108924500A (en) Intelligent elevated table camera shooting based reminding method, device, intelligent elevated table and storage medium
JP6928369B2 (en) Information processing system and program
JPWO2018179325A1 (en) Registration apparatus, authentication apparatus, personal authentication system, personal authentication method, program, and recording medium
CN112220212B (en) Table/chair adjusting system and method based on face recognition
CN111125533A (en) Menu recommendation method and device and computer readable storage medium
CN114594694A (en) Equipment control method and device, intelligent pad and storage medium
JP2010086478A (en) Authentication method, authentication program, and information processing apparatus
CN115568716A (en) Adaptive control method for air bag mattress, air bag mattress and storage medium
CN116189895A (en) Control method and device of health detection equipment, computer equipment and storage medium
CN113940523B (en) Self-adjusting method and device of intelligent mattress, intelligent mattress and storage medium
CN107728501A (en) A kind of intelligent seat regulation and control method based on time detecting
CN109409322B (en) Living body detection method and device, face recognition method and face detection system
CN109820523A (en) Psychological tester control method, device and computer readable storage medium
CN107467946A (en) A kind of intelligent seat regulator control system based on time detecting
JP2012014650A (en) Mental/physical condition control apparatus
CN111291626A (en) Recipe recommendation method, device and system
CN111432131A (en) Photographing frame selection method and device, electronic equipment and storage medium
JP2002216133A (en) Image processing system and sense information processing system
JP2018136869A (en) Health management support apparatus and health management support method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240513

Address after: 518000 floor 1, building 3, Dexin Chang wisdom Park, No. 23 Heping Road, Qinghua community, Longhua street, Longhua District, Shenzhen, Guangdong

Applicant after: Shenzhen waterward Information Co.,Ltd.

Country or region after: China

Address before: 518000 B, 503, 602, digital city building, garden city, 1079 Shekou Road, Shekou, Nanshan District, Shenzhen, Guangdong.

Applicant before: SHENZHEN WATER WORLD Co.,Ltd.

Country or region before: China

GR01 Patent grant