CN115662575B - Dynamic image generation and playing method based on meditation training - Google Patents

Dynamic image generation and playing method based on meditation training

Info

Publication number
CN115662575B
CN115662575B (application CN202211704792.1A)
Authority
CN
China
Prior art keywords
meditation
dynamic
image
moment
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211704792.1A
Other languages
Chinese (zh)
Other versions
CN115662575A (en)
Inventor
韩璧丞
苏度
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mental Flow Technology Co Ltd
Original Assignee
Shenzhen Mental Flow Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mental Flow Technology Co Ltd filed Critical Shenzhen Mental Flow Technology Co Ltd
Priority to CN202211704792.1A priority Critical patent/CN115662575B/en
Publication of CN115662575A publication Critical patent/CN115662575A/en
Application granted granted Critical
Publication of CN115662575B publication Critical patent/CN115662575B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Electrically Operated Instructional Devices (AREA)

Abstract

The invention discloses a method for generating and playing dynamic images based on meditation training. For each moment in the meditation training process, the dynamic image and background sound for that moment are generated according to the change in the meditation user's relaxation degree between the previous moment and the current moment. Because the relaxation degree reflects the user's current concentration, the generated dynamic image and background sound record the user's mental change from the previous moment to the current moment, and this combined image-and-sound record makes it easier for the meditation user to feel immersed, helping the user understand his or her current meditation state and later review the training process. This solves the problem that the prior art lacks a method for recording the mental changes of meditation users, so that users can review the training process only from memory, which affects the meditation training effect.

Description

Dynamic image generation and playing method based on meditation training
Technical Field
The invention relates to the field of image processing, in particular to a dynamic image generation and playing method based on meditation training.
Background
Meditation training mainly consists of the user's autonomous control of his or her mental processes. At present, there is no method for recording the user's mental changes during training, and it is difficult to review the training process relying only on the meditation user's memory, which affects the meditation training effect.
Accordingly, there is a need for improvement and development in the art.
Disclosure of Invention
The technical problem to be solved by the invention is to provide, in view of the defects of the prior art, a dynamic image generation and playing method based on meditation training, aiming to solve the problems that the prior art lacks a method for recording the mental changes of meditation users, and that meditation users can hardly review the training process relying only on their own memory, which affects the meditation training effect.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating and playing a dynamic image based on meditation training, wherein the method includes:
acquiring the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, wherein the relaxation degree reflects how focused the meditation user is on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjusting the number and category of the dynamic objects in the foreground image according to the adjusted picture ratio;
determining a background sound type according to the adjusted categories of the dynamic objects, determining a background sound volume according to the adjusted number of the dynamic objects, and determining the background sound at the current moment according to the background sound type and the background sound volume;
and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image at the current moment and the background sound to the meditation user.
In one embodiment, if the current moment is the initial moment at which the meditation user performs meditation training, the method for acquiring the dynamic image at the initial moment includes:
acquiring the environment information of the meditation user and the relaxation degree at the initial moment;
determining an initial picture ratio according to the relaxation degree at the initial moment;
determining a target theme scene according to the environment information, wherein different theme scenes correspond to different categories of static objects and dynamic objects;
and arranging all static objects and all dynamic objects corresponding to the target theme scene according to the initial picture ratio, to obtain the dynamic image at the initial moment.
In one embodiment, the method for obtaining the relaxation degree at each moment includes:
acquiring electroencephalogram data and motion data of the meditation user at that moment;
and determining the relaxation degree at that moment according to the electroencephalogram data and the motion data.
In one embodiment, the method further comprises:
acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation degree interval;
determining a target filter parameter from the filter parameters according to the relaxation degree at the current moment;
and displaying the dynamic image at the current moment according to the target filter parameters.
In one embodiment, the method further comprises:
acquiring a preset relaxation degree threshold, and judging whether the relaxation degree at the current moment reaches the relaxation degree threshold;
if the relaxation degree at the current moment reaches the relaxation degree threshold, judging that the meditation user has entered a fixed state;
and adjusting all the dynamic objects in the dynamic image at the current moment to be static.
In one embodiment, the method further comprises:
acquiring the duration for which the meditation user has been in the fixed state and a preset duration threshold;
judging whether the duration reaches the duration threshold;
and displaying a reminder to stop meditation on the dynamic image at the current moment when the duration reaches the duration threshold.
In one embodiment, the method further comprises:
acquiring all the dynamic images generated from the initial moment until the meditation user enters the fixed state;
and generating meditation display video of the meditation user according to all the dynamic images.
In a second aspect, an embodiment of the present invention further provides a dynamic image generating and playing device based on meditation training, where the device includes:
an acquisition module, configured to acquire the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, wherein the relaxation degree reflects how focused the meditation user is on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
an adjustment module, configured to adjust the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjust the number and category of the dynamic objects in the foreground image according to the adjusted picture ratio;
and determine a background sound type according to the adjusted categories of the dynamic objects, a background sound volume according to the adjusted number of the dynamic objects, and the background sound at the current moment according to the background sound type and the background sound volume;
and a playing module, configured to determine the dynamic image at the current moment according to the adjusted dynamic image and play the dynamic image at the current moment and the background sound to the meditation user.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and one or more processors; the memory stores one or more programs containing instructions for executing any of the meditation training-based dynamic image generation and playing methods described above; and the processor is configured to execute the programs.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium having a plurality of instructions stored thereon, where the instructions are adapted to be loaded and executed by a processor to implement the steps of any of the above-described meditation training-based dynamic image generation and playback methods.
The invention has the beneficial effects that: the embodiment of the invention generates, for each moment in the meditation training process, the dynamic image and background sound for that moment according to the change in the meditation user's relaxation degree between that moment and the previous moment. Because the relaxation degree reflects the user's current concentration, the generated dynamic image and background sound record the user's mental change from the previous moment to the current moment, and this combined image-and-sound record makes it easier for the meditation user to feel immersed, helping the user understand his or her current meditation state and later review the training process. This solves the problem that the prior art lacks a method for recording the mental changes of meditation users, so that users can review the training process only from memory, which affects the meditation training effect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention, and those skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for generating and playing dynamic images based on meditation training according to an embodiment of the present invention.
Fig. 2 is a block diagram of a dynamic image generating and playing device based on meditation training according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the purposes, technical schemes and effects of the invention clearer and more definite, the dynamic image generation and playing method based on meditation training disclosed by the invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless expressly stated otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In view of the above-mentioned drawbacks of the prior art, the present invention provides a method for generating and playing dynamic images based on meditation training, the method comprising: acquiring the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, wherein the relaxation degree reflects how focused the meditation user is on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects; adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment; adjusting the number and category of the dynamic objects in the foreground image according to the adjusted picture ratio; determining a background sound type according to the adjusted categories of the dynamic objects, determining a background sound volume according to the adjusted number of the dynamic objects, and determining the background sound at the current moment according to the background sound type and the background sound volume; and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image at the current moment and the background sound to the meditation user.
The invention generates, for each moment in the meditation training process, the dynamic image and background sound for that moment according to the change in the meditation user's relaxation degree between that moment and the previous moment. Because the relaxation degree reflects the user's current concentration, the generated dynamic image and background sound record the user's mental change from the previous moment to the current moment, and this combined image-and-sound record makes it easier for the meditation user to feel immersed, helping the user understand his or her current meditation state and later review the training process. This solves the problem that the prior art lacks a method for recording the mental changes of meditation users, so that users can review the training process only from memory, which affects the meditation training effect.
As shown in fig. 1, the method includes:
step S100, acquiring the relaxation degree and dynamic image of the meditation user at the previous moment and the relaxation degree at the current moment, wherein the relaxation degree reflects how focused the meditation user is on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects.
Specifically, the meditation user may be any user performing meditation training. This embodiment evaluates the meditation user's concentration during training in real time and obtains the relaxation degree at each moment from that concentration: the higher the relaxation degree, the calmer the meditation user's current mood and the more focused the user is on the meditation training. In order to record the user's mental changes during training, this embodiment generates a dynamic image with background sound for each moment, reflecting the user's mental state at the current moment through the dynamic image. Specifically, the dynamic image at each moment is generated by adjusting the dynamic image at the previous moment according to the change in relaxation degree from the previous moment to the current moment, so the user's psychological changes can be read from the differences between dynamic images at adjacent moments. Compared with recording numerical values, changes in the dynamic image reflect the user's mood fluctuations more directly and immersively, which helps the meditation user review the training process.
In one implementation, if the current moment is the initial moment at which the meditation user performs meditation training, the method for acquiring the dynamic image at the initial moment includes:
step S10, acquiring the environment information of the meditation user and the relaxation degree at the initial moment;
step S11, determining an initial picture ratio according to the relaxation degree at the initial moment;
step S12, determining a target theme scene according to the environment information, wherein different theme scenes correspond to different categories of static objects and dynamic objects;
and step S13, arranging all static objects and all dynamic objects corresponding to the target theme scene according to the initial picture ratio, to obtain the dynamic image at the initial moment.
If the current moment is the first moment of meditation training, the first dynamic image of the training session needs to be generated. To further enhance the meditation user's sense of immersion, this embodiment reflects the user's current environment in the dynamic image. Specifically, environment information such as air temperature and illumination is acquired, the best-matching target theme scene is selected from a plurality of preset theme scenes according to the environment information, and the categories of static and dynamic objects contained in the interface are then determined by the target theme scene. The initial positions of the static and dynamic objects are determined according to the initial picture ratio set from the initial relaxation degree, and the objects are then arranged to obtain the first dynamic image.
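As a rough illustration of steps S10 to S13, the Python sketch below (not part of the patent text) picks a theme scene from two environment readings and derives an initial picture ratio from the initial relaxation degree. The scene names, thresholds, and the 0-100 relaxation scale are all invented for illustration.

```python
# Hypothetical sketch of steps S10-S13: scene names, thresholds and the
# 0-100 relaxation scale are assumptions, not taken from the patent.

THEME_SCENES = {
    "forest": {"static": ["tree", "flower"], "dynamic": ["bird", "stream"]},
    "beach":  {"static": ["rock", "palm"],   "dynamic": ["wave", "gull"]},
}

def pick_theme_scene(temperature_c, illuminance_lux):
    """Select the preset scene that best matches the environment (toy rule)."""
    if temperature_c >= 25 and illuminance_lux >= 500:
        return "beach"
    return "forest"

def initial_picture_ratio(relaxation, max_relaxation=100):
    """Map a higher initial relaxation degree to a larger (calmer)
    background share of the frame."""
    background = min(max(relaxation / max_relaxation, 0.0), 1.0)
    return {"background": background, "foreground": 1.0 - background}

scene = pick_theme_scene(temperature_c=22, illuminance_lux=300)
ratio = initial_picture_ratio(relaxation=60)
```

The categories in `THEME_SCENES[scene]` would then be laid out according to `ratio` to produce the first dynamic image.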
In another implementation, the method further comprises:
acquiring a scene switching instruction input by the user;
and replacing the theme scene according to the scene switching instruction, and replacing the categories of the dynamic objects in the foreground image and of the static objects in the background image according to the replaced theme scene.
For example, the theme scene may be a forest scene, in which the static objects may include trees, flowers and plants, and the dynamic objects may include birds and flowing water.
In one implementation, the method for acquiring the relaxation degree at each moment includes:
step S101, acquiring electroencephalogram data and motion data of the meditation user at that moment;
step S102, determining the relaxation degree at that moment according to the electroencephalogram data and the motion data.
Specifically, when the meditation user is at different levels of concentration, both the characteristics of the electroencephalogram data and the characteristics of the limb motions differ, so the relaxation degree at the current moment can be determined from the user's electroencephalogram data and motion data.
In one implementation, the step S102 specifically includes:
determining the current band lengths corresponding to the three bands of the distraction wave (THETA), attention wave (SMR) and tension wave (Hi-beta) of the meditation user according to the electroencephalogram data;
acquiring the original band lengths corresponding to the three bands, wherein the original band lengths are the band lengths collected from the meditation user before training starts;
determining the length variation of each of the three bands according to its current band length and original band length;
and determining the relaxation degree of the meditation user according to the length variations of the three bands.
Specifically, the three bands of the distraction wave (THETA), attention wave (SMR) and tension wave (Hi-beta) are closely related to a person's concentration. To prevent individual differences from affecting the result, this embodiment does not determine the meditation user's relaxation degree directly from the band lengths of the three bands, but from the amount by which each band's length has changed between the start of training and the current moment.
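One way the three length variations could be combined into a single relaxation degree is sketched below. The linear form, the weights, and the 0-100 scale are assumptions for illustration; the patent only states that the three variations determine the score.

```python
# Illustrative combination of band-length changes into a relaxation degree.
# Weights, the linear form and the 0-100 scale are assumptions.

def relaxation_degree(current, baseline, weights=None):
    """Weight each band's relative length change so that growth in the
    attention band (SMR) raises the score while growth in the distraction
    (THETA) and tension (Hi-beta) bands lowers it; clamp to [0, 100]."""
    if weights is None:
        weights = {"THETA": -0.5, "SMR": 1.0, "Hi-beta": -0.8}
    score = 50.0  # neutral starting point (assumed)
    for band, w in weights.items():
        delta = (current[band] - baseline[band]) / baseline[band]
        score += 100.0 * w * delta
    return min(max(score, 0.0), 100.0)

baseline = {"THETA": 10.0, "SMR": 8.0, "Hi-beta": 6.0}  # pre-training lengths
current = {"THETA": 9.0, "SMR": 9.6, "Hi-beta": 5.4}    # current lengths
score = relaxation_degree(current, baseline)
```

With no change in any band, the score stays at the neutral midpoint; falling distraction and tension with rising attention pushes it upward.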
As shown in fig. 1, the method further includes:
step S200, adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment.
Specifically, the dynamic image in this embodiment mainly includes two parts: a foreground image and a background image. The foreground image includes a number of dynamic objects and the background image includes a number of static objects. The picture ratio of the foreground image to the background image directly influences the proportion of static to dynamic objects, and thus the overall impression the image gives. This embodiment quantifies the change in the meditation user's concentration during training by the difference in relaxation degree between the current moment and the previous moment, and uses that difference to adjust the picture ratio of the foreground image to the background image, giving the meditation user different impressions that vividly reflect his or her psychological changes.
As shown in fig. 1, the method further includes:
step S300, adjusting the number and category of the dynamic objects in the foreground image according to the adjusted picture ratio.
Specifically, after the picture ratio is adjusted, the number and categories of the static and dynamic objects need to be adjusted accordingly. For example, if the interface area of the background image is enlarged after adjustment, the number and categories of static objects must increase and those of dynamic objects decrease, so that the displayed image matches the adjusted picture ratio.
In one implementation, the static objects and the dynamic objects correspond to objects of different categories, and the step S300 specifically includes:
acquiring the image dividing line between the foreground image and the background image at the previous moment, wherein the dynamic objects are on one side of the image dividing line and the static objects on the other;
adjusting the position of the image dividing line according to the adjusted picture ratio;
and adjusting the dynamic or static state of the objects on the two sides of the image dividing line according to the adjusted position.
Specifically, in this embodiment the dynamic or static state of each object on the interface is determined by the image dividing line between the foreground image and the background image. After the picture ratio is determined, the position of the image dividing line is recalculated from it, and the states of the objects on both sides are then adjusted so that the presented dynamic image is consistent with the adjusted picture ratio. It will be appreciated that objects of the same category have different states on different sides of the line: for example, grass that was static on one side of the line becomes dynamic, swaying with the wind, once it falls on the other side after the line is moved.
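A toy version of the dividing-line mechanism is sketched below. The vertical coordinate convention (objects above the line are foreground) is an assumption for illustration.

```python
# Hypothetical sketch of the image dividing line: each object carries a
# vertical position in [0, 1], and its side of the line decides whether it
# renders as a static (background) or dynamic (foreground) object.

def reassign_states(objects, line_y):
    """Mark objects below the line static and objects above it dynamic."""
    for obj in objects:
        obj["state"] = "dynamic" if obj["y"] > line_y else "static"
    return objects

# The same category can sit on both sides of the line: one patch of grass
# stays still while the other sways, as described above.
scene = [{"name": "grass", "y": 0.3}, {"name": "grass", "y": 0.8}]
reassign_states(scene, line_y=0.5)
```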
As shown in fig. 1, the method further includes:
step S400, determining a background sound type according to the adjusted categories of the dynamic objects, determining a background sound volume according to the adjusted number of the dynamic objects, and determining the background sound at the current moment according to the background sound type and the background sound volume.
Specifically, the static and dynamic objects in the dynamic image belong to different categories, and this embodiment gives the dynamic objects sound properties: rolling waves have a wave sound, and trees have the rustle of leaves. The background sound type of the dynamic image at the current moment is therefore determined from the adjusted categories of the dynamic objects; that is, the background sound is a composite of the object sounds corresponding to all the dynamic objects. Furthermore, both an increase and a decrease in the number of dynamic objects correspond to mood changes of the meditation user. An increase means that the user's concentration has dropped and the mood is more restless, so the background sound volume is raised to vividly convey that restlessness; a decrease indicates high concentration and a calm mood, so the volume is lowered. By combining picture and sound, this embodiment gives the meditation user the feeling of being personally on the scene, further increasing the sense of immersion.
In one implementation, determining the background sound type according to the adjusted categories of the dynamic objects specifically includes:
acquiring, according to the adjusted categories of the dynamic objects, the sound data corresponding to each category, wherein the sound data for each category is predetermined;
acquiring the sound wave data corresponding to each piece of sound data, and fusing the sound wave data to obtain fused sound wave data;
and determining the background sound type at the current moment according to the fused sound wave data.
Specifically, each dynamic object has specific sound data stored in association with it. To fuse the sounds into a composite, this embodiment acquires the sound data of each currently present dynamic-object category, converts it into sound wave data, and merges all the sound wave data into one piece of waveform data. Since the fused sound wave data combines the waveform characteristics of every input, it serves as the composite sound, and the background sound type at the current moment is determined from it.
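The waveform fusion could look like the sketch below, where each dynamic-object category maps to a stored tone and the per-sample average of the present tones forms the fused wave. The frequencies and the averaging rule are assumptions for illustration.

```python
# Illustrative waveform fusion for the composite background sound.
# Category frequencies and the averaging rule are assumptions.

import math

CATEGORY_TONES_HZ = {"wave": 110.0, "bird": 880.0, "leaves": 440.0}

def fuse_sound_waves(categories, n_samples=8, sample_rate=8000):
    """Average the sine waves of every present category, sample by sample."""
    fused = []
    for i in range(n_samples):
        t = i / sample_rate
        total = sum(math.sin(2 * math.pi * CATEGORY_TONES_HZ[c] * t)
                    for c in categories)
        fused.append(total / len(categories))
    return fused

composite = fuse_sound_waves(["wave", "bird"])
```

Averaging rather than summing keeps the fused wave within the amplitude range of its inputs, so adding more object categories does not clip the output.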
In one implementation, the method further comprises:
step S20, acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation degree interval;
s21, determining target filter parameters from the filter parameters according to the relaxation degree at the current moment;
and S22, displaying the dynamic image at the current moment according to the target filter parameters.
Specifically, different filter parameters give the dynamic image a different look and feel for the meditation user. A target filter parameter corresponding to the relaxation degree at the current moment is selected from the plurality of predetermined filter parameters, and the image display is controlled according to the target filter parameter, so that the image style changes with the meditation user's relaxation degree. Through the change in image style, the meditation user can feel the change in his or her current concentration more vividly, producing a stronger sense of immersion.
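Steps S20 to S22 amount to an interval lookup, as in the sketch below; the interval boundaries and filter names are invented for illustration.

```python
# A possible lookup for steps S20-S22: each preset filter parameter owns a
# relaxation interval, and the filter whose interval contains the current
# value is applied. Boundaries and names are assumptions.

FILTER_PRESETS = [
    ((0, 40), {"name": "storm", "saturation": 0.6}),
    ((40, 70), {"name": "dusk", "saturation": 0.8}),
    ((70, 101), {"name": "serene", "saturation": 1.0}),
]

def target_filter(relaxation):
    """Return the filter parameters whose interval covers `relaxation`."""
    for (lo, hi), params in FILTER_PRESETS:
        if lo <= relaxation < hi:
            return params
    raise ValueError("relaxation degree out of range")
```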
As shown in fig. 1, the method further includes:
step S500, determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image at the current moment together with the background sound to the meditation user.
Specifically, the dynamic images in this embodiment record the meditation user's psychological changes during training. Besides being stored for later review of the training process, each dynamic image and its corresponding background sound are also played in real time, so that the meditation user can observe his or her own psychological changes in real time during training.
In one implementation, the method further comprises:
step S30, acquiring a preset relaxation degree threshold value, and judging whether the relaxation degree at the current moment reaches the relaxation degree threshold value or not;
step S31, determining that the meditation user has entered a fixed state if the relaxation degree at the current moment reaches the relaxation degree threshold;
step S32, adjusting all the dynamic objects in the dynamic image at the current time to be static.
Specifically, this embodiment presets a relaxation degree threshold. When the meditation user's relaxation degree reaches this threshold, the user's concentration is high and mood fluctuation is small, so the user is judged to have entered the fixed state. Since the user's mind is calm in the fixed state, all the dynamic objects in the dynamic image at the current moment are correspondingly adjusted to be stationary. By later checking the dynamic images at each moment, the user can see when the fixed state was entered and how long it was maintained, which helps in reviewing the training process.
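Steps S30–S32 amount to a threshold check that, when satisfied, stills every dynamic object. A minimal sketch, assuming a numeric relaxation scale and a `moving` flag per object (both assumptions; the threshold value is also illustrative):

```python
RELAXATION_THRESHOLD = 80.0  # assumed preset value

def update_fixed_state(relaxation, dynamic_objects,
                       threshold=RELAXATION_THRESHOLD):
    """If the relaxation degree reaches the threshold, the user is
    judged to have entered the fixed state and every dynamic object
    in the current frame is stilled."""
    in_fixed_state = relaxation >= threshold
    if in_fixed_state:
        for obj in dynamic_objects:
            obj["moving"] = False  # freeze all dynamic objects
    return in_fixed_state
```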
In one implementation, the method further comprises:
step S40, acquiring the continuous duration for which the meditation user has been in the fixed state, and a preset duration threshold;
step S41, judging whether the continuous duration reaches the duration threshold;
and step S42, displaying a reminder to stop meditation on the dynamic image at the current moment when the continuous duration reaches the duration threshold.
Specifically, since meditation users generally adopt a sitting posture and prolonged sitting can affect health, this embodiment presets a duration threshold for health reasons. When the continuous duration of the fixed state reaches the threshold, the user is considered to have completed the day's meditation training, and a reminder to stop meditating is displayed on the current interface. The reminder may take the form of voice or text.
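The duration check in steps S40–S42 can be sketched as a single comparison; the caller shows the reminder when it returns true. Time units and the function name are assumptions for illustration.

```python
def should_remind(entered_at, now, duration_threshold):
    """True once the user has stayed in the fixed state for at least
    `duration_threshold` seconds, at which point the interface should
    display the stop-meditation reminder."""
    return (now - entered_at) >= duration_threshold
```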
In one implementation, the method further comprises:
step S50, acquiring all the dynamic images generated from the initial moment until the moment the meditation user entered the fixed state;
step S51, generating meditation showing video of the meditation user according to all the dynamic images.
Specifically, to make it easier for the meditation user to review the training process, this embodiment may integrate all the dynamic images generated during meditation training into one video, i.e., the meditation display video. Through the changes of moving and static objects, image style, and background sound in the meditation display video, the user can intuitively understand his or her own psychological and concentration changes. The user can also draw up the next training plan based on the meditation display video, thereby obtaining a better training effect.
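Assembling the meditation display video comes down to selecting every frame from the initial moment up to the fixed-state moment and encoding them in order. A sketch of the selection step, assuming timestamped frames (names illustrative):

```python
def collect_review_frames(frames, fixed_state_time):
    """Select every (timestamp, image) frame generated from the initial
    moment up to when the user entered the fixed state. Concatenating
    these frames in timestamp order (e.g. with a video encoder such as
    OpenCV's VideoWriter) yields the meditation display video."""
    return [img for t, img in frames if t <= fixed_state_time]
```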
Based on the above embodiment, the present invention further provides a dynamic image generating and playing device based on meditation training, as shown in fig. 2, the device includes:
an obtaining module 01, configured to obtain a degree of relaxation of a meditation user at a previous time, a dynamic image, and the degree of relaxation at a current time, where the degree of relaxation is used to reflect a degree that the meditation user is focused on meditation, the dynamic image at the previous time includes a foreground image and a background image, and the foreground image includes a plurality of dynamic objects;
an adjustment module 02, configured to: adjust the picture ratio of the foreground image and the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjust the number and category of each dynamic object in the foreground image according to the adjusted picture ratio;
and determine a background sound type according to the category of each adjusted dynamic object, determine a background sound volume according to the number of adjusted dynamic objects, and determine the background sound at the current moment according to the background sound type and the background sound volume;
and the playing module 03 is configured to determine the moving image at the current time according to the adjusted moving image, and play the moving image at the current time and the background sound to the meditation user.
Based on the above embodiment, the present invention also provides a terminal, and a functional block diagram thereof may be shown in fig. 3. The terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein the processor of the terminal is adapted to provide computing and control capabilities. The memory of the terminal includes a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the terminal is used for communicating with an external terminal through a network connection. The computer program, when executed by the processor, implements a method of generating and playing dynamic images based on meditation training. The display screen of the terminal may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the functional block diagram shown in fig. 3 is merely a block diagram of some of the structures associated with the present invention and does not limit the terminal to which the present invention may be applied; a particular terminal may include more or fewer components than those shown, combine some of the components, or arrange the components differently.
In one implementation, the memory of the terminal stores one or more programs, and the one or more programs contain instructions which, when executed by the one or more processors, perform the meditation training-based method of generating and playing dynamic images.
Those skilled in the art will appreciate that implementing all or part of the above-described methods may be accomplished by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may comprise the steps of the method embodiments described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
In summary, the invention discloses a method for generating and playing dynamic images based on meditation training, which comprises the following steps: acquiring the degree of relaxation of a meditation user at the previous moment, a dynamic image and the degree of relaxation at the current moment, wherein the degree of relaxation is used for reflecting the degree of concentration of the meditation user on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects; according to the difference value of the relaxation degrees of the current moment and the previous moment, adjusting the picture ratio of the foreground image and the background image; according to the adjusted picture ratio, adjusting the number and the category of each dynamic object in the foreground image; determining a background sound type according to the category of each adjusted dynamic object, determining a background sound volume according to the number of each adjusted dynamic object, and determining a background sound at the current moment according to the background sound type and the background sound volume; and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image at the current moment and the background sound to the meditation user.
The invention generates, for each moment in the meditation training process, the dynamic image and background sound of that moment according to the change in the meditation user's relaxation degree between that moment and the previous one. Since the relaxation degree reflects the user's current concentration, the generated dynamic image and background sound amount to a record of the user's mental change from the previous moment to the current one, and this combined image-and-sound record makes it easier for the user to feel immersed, helping the user understand his or her current meditation state and review the training process afterwards. This solves the problem that the prior art lacks a way to record meditation users' mental changes, leaving users to reconstruct the training process from memory alone, which affects the meditation training effect.
It is to be understood that the invention is not limited in its application to the examples described above, but is capable of modification and variation in light of the above teachings by those skilled in the art, and that all such modifications and variations are intended to be included within the scope of the appended claims.

Claims (8)

1. A method for generating and playing dynamic images based on meditation training, the method comprising:
acquiring the degree of relaxation of a meditation user at the previous moment, a dynamic image and the degree of relaxation at the current moment, wherein the degree of relaxation is used for reflecting the degree of concentration of the meditation user on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
according to the difference value of the relaxation degrees of the current moment and the previous moment, adjusting the picture occupation ratio of the foreground image and the background image;
according to the adjusted picture duty ratio, the number and the category of each dynamic object in the foreground image are adjusted;
determining a background sound type according to the category of each adjusted dynamic object, determining a background sound volume according to the number of each adjusted dynamic object, and determining a background sound at the current moment according to the background sound type and the background sound volume;
determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image at the current moment and the background sound to the meditation user;
the method for acquiring the relaxation degree at each moment comprises the following steps:
acquiring electroencephalogram data and action data of the meditation user at the moment;
determining the degree of relaxation at that time according to the electroencephalogram data and the action data;
the method further comprises the steps of:
acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation degree interval;
determining a target filter parameter from the filter parameters according to the relaxation degree at the current moment;
displaying the dynamic image at the current moment according to the target filter parameters;
the method further comprises the steps of: determining the current wave band lengths corresponding to the distraction wave, the attention wave and the tension wave of the meditation user according to the electroencephalogram data; the method comprises the steps of obtaining the original wave band lengths corresponding to three wave bands respectively, wherein the original wave band lengths are wave band lengths collected by meditation users before training starts; determining the length variation corresponding to the three wave bands according to the current wave band length and the original wave band length corresponding to the three wave bands respectively; determining the corresponding relaxation degree of the meditation user according to the length variation corresponding to the three wave bands respectively;
the adjusting the number and the category of each dynamic object in the foreground image according to the adjusted picture duty ratio specifically includes: acquiring an image dividing line corresponding to the foreground image and the background image at the previous moment, wherein one side of the image dividing line is the dynamic object, and the other side of the image dividing line is the static object; according to the adjusted picture duty ratio, adjusting the position of the image dividing line; according to the adjusted image dividing line, adjusting the dynamic and static states of objects at two sides of the image dividing line;
the background sound type is a composite sound of object sounds corresponding to the dynamic objects respectively; the determining the background sound type according to the category of each adjusted dynamic object specifically comprises the following steps: according to the adjusted categories of the dynamic objects, sound data corresponding to the categories are obtained, wherein the sound data corresponding to the categories are predetermined; acquiring sound wave data corresponding to each sound data, and fusing the sound wave data to obtain fused sound wave data; and determining the background sound type at the current moment according to the fused sound wave data.
2. The meditation training-based moving image generation and playback method according to claim 1, wherein, if a current time is an initial time at which the meditation training is performed by the meditation user, the moving image acquisition method of the initial time includes:
acquiring the environment information of the meditation user and the degree of relaxation at the initial time;
determining an initial picture duty ratio according to the relaxation degree at the initial moment;
determining target theme scenes according to the environment information, wherein different theme scenes respectively correspond to different types of static objects and dynamic objects;
and according to the initial picture duty ratio, arranging all static objects and all dynamic objects corresponding to the target theme scene to obtain the dynamic image at the initial moment.
3. The meditation training-based dynamic image generation and playback method according to claim 2, characterized in that the method further comprises:
acquiring a preset relaxation degree threshold value, and judging whether the relaxation degree at the current moment reaches the relaxation degree threshold value or not;
judging that the meditation user enters a fixed state if the relaxation degree at the current moment reaches the relaxation degree threshold value;
and all the dynamic objects in the dynamic image at the current moment are adjusted to be static.
4. A method of generating and playing back dynamic images based on meditation training according to claim 3, characterized in that the method further comprises:
acquiring the continuous time length of the meditation user entering the fixed state and a preset time length threshold;
judging whether the continuous time length reaches the time length threshold value or not;
and displaying reminding information for stopping meditation on the dynamic image at the current moment when the continuous duration reaches the duration threshold.
5. A method of generating and playing back dynamic images based on meditation training according to claim 3, characterized in that the method further comprises:
acquiring all the dynamic images generated from the initial time to the time period when the meditation user enters a fixed state;
and generating meditation display video of the meditation user according to all the dynamic images.
6. A meditation training-based dynamic image generation and playback apparatus, characterized in that the apparatus comprises:
the system comprises an acquisition module, a storage module and a storage module, wherein the acquisition module is used for acquiring the degree of relaxation of a meditation user at the previous moment, a dynamic image and the degree of relaxation at the current moment, wherein the degree of relaxation is used for reflecting the degree that the meditation user is focused on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
the adjustment module is used for adjusting the picture duty ratio of the foreground image and the background image according to the difference value of the relaxation degrees of the current moment and the previous moment;
according to the adjusted picture duty ratio, the number and the category of each dynamic object in the foreground image are adjusted;
determining a background sound type according to the category of each adjusted dynamic object, determining a background sound volume according to the number of each adjusted dynamic object, and determining a background sound at the current moment according to the background sound type and the background sound volume;
the playing module is used for determining the dynamic image at the current moment according to the adjusted dynamic image and playing the dynamic image at the current moment and the background sound to the meditation user;
the method for acquiring the relaxation degree at each moment comprises the following steps:
acquiring electroencephalogram data and action data of the meditation user at the moment;
determining the degree of relaxation at that time according to the electroencephalogram data and the action data;
the method further comprises the steps of:
acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation degree interval;
determining a target filter parameter from the filter parameters according to the relaxation degree at the current moment;
displaying the dynamic image at the current moment according to the target filter parameters;
the method further comprises the steps of: determining the current wave band lengths corresponding to the distraction wave, the attention wave and the tension wave of the meditation user according to the electroencephalogram data; the method comprises the steps of obtaining the original wave band lengths corresponding to three wave bands respectively, wherein the original wave band lengths are wave band lengths collected by meditation users before training starts; determining the length variation corresponding to the three wave bands according to the current wave band length and the original wave band length corresponding to the three wave bands respectively; determining the corresponding relaxation degree of the meditation user according to the length variation corresponding to the three wave bands respectively;
the adjusting the number and the category of each dynamic object in the foreground image according to the adjusted picture duty ratio specifically includes: acquiring an image dividing line corresponding to the foreground image and the background image at the previous moment, wherein one side of the image dividing line is the dynamic object, and the other side of the image dividing line is the static object; according to the adjusted picture duty ratio, adjusting the position of the image dividing line; according to the adjusted image dividing line, adjusting the dynamic and static states of objects at two sides of the image dividing line;
the background sound type is a composite sound of object sounds corresponding to the dynamic objects respectively; the determining the background sound type according to the category of each adjusted dynamic object specifically comprises the following steps: according to the adjusted categories of the dynamic objects, sound data corresponding to the categories are obtained, wherein the sound data corresponding to the categories are predetermined; acquiring sound wave data corresponding to each sound data, and fusing the sound wave data to obtain fused sound wave data; and determining the background sound type at the current moment according to the fused sound wave data.
7. A terminal comprising a memory and one or more processors; the memory stores more than one program; the program contains instructions for executing the meditation training-based moving image generation and playback method according to any one of claims 1 to 5; the processor is configured to execute the program.
8. A computer readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor for carrying out the steps of the meditation training based dynamic image generation and play method according to any of the preceding claims 1-5.
CN202211704792.1A 2022-12-29 2022-12-29 Dynamic image generation and playing method based on meditation training Active CN115662575B (en)
