CN115662575A - Dynamic image generation and playing method based on meditation training - Google Patents


Publication number
CN115662575A
Authority
CN
China
Prior art keywords: meditation, dynamic, image, user, degree
Prior art date
Legal status
Granted
Application number
CN202211704792.1A
Other languages
Chinese (zh)
Other versions
CN115662575B (en)
Inventor
韩璧丞
苏度
Current Assignee
Shenzhen Mental Flow Technology Co Ltd
Original Assignee
Shenzhen Mental Flow Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Mental Flow Technology Co Ltd
Priority to CN202211704792.1A
Publication of CN115662575A
Application granted
Publication of CN115662575B
Active legal status
Anticipated expiration legal status

Abstract

The invention discloses a method for generating and playing dynamic images based on meditation training. For each moment in the meditation training process, a dynamic image and background sound are generated according to the change in the meditation user's relaxation degree between that moment and the previous moment. This solves the prior-art problem that, lacking a method of recording the meditation user's psychological changes, the training process is difficult to review from memory alone, which degrades the effect of meditation training.

Description

Dynamic image generation and playing method based on meditation training
Technical Field
The invention relates to the field of image processing, and in particular to a method for generating and playing dynamic images based on meditation training.
Background
Meditation training mainly involves a user's autonomous control of psychological processes. At present there is no method of recording the user's psychological changes during training, and the training process is difficult to review relying on the meditation user's memory alone, which degrades the effect of the meditation training.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The present invention aims to provide a method for generating and playing dynamic images based on meditation training, in order to solve the problem that, for lack of a method of recording the meditation user's psychological changes, the training process is difficult to review relying on the meditation user's memory alone, which degrades the effect of meditation training.
The technical solution adopted by the invention to solve this problem is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating and playing a dynamic image based on meditation training, wherein the method includes:
acquiring the relaxation degree of a meditation user at the previous moment, the dynamic image at the previous moment, and the relaxation degree at the current moment, wherein the relaxation degree reflects the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjusting the number and category of each dynamic object in the foreground image according to the adjusted picture ratio;
determining the type of the background sound according to the adjusted category of each dynamic object, determining the volume of the background sound according to the adjusted number of each dynamic object, and determining the background sound at the current moment according to the type and volume of the background sound;
and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image and the background sound at the current moment to the meditation user.
In one embodiment, if the current moment is the initial moment at which the meditation user performs the meditation training, the method for acquiring the dynamic image at the initial moment includes:
acquiring environment information of the meditation user and the relaxation degree at the initial moment;
determining an initial picture ratio according to the relaxation degree at the initial moment;
determining a target theme scene according to the environment information, wherein different theme scenes correspond to different categories of static objects and dynamic objects;
and arranging the static objects and dynamic objects corresponding to the target theme scene according to the initial picture ratio to obtain the dynamic image at the initial moment.
In one embodiment, the method for obtaining the relaxation degree at each moment comprises the following steps:
acquiring electroencephalogram data and motion data of the meditation user at that moment;
and determining the relaxation degree at that moment according to the electroencephalogram data and the motion data.
In one embodiment, the method further comprises:
acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation-degree interval;
determining a target filter parameter from the filter parameters according to the relaxation degree at the current moment;
and displaying the dynamic image at the current moment according to the target filter parameter.
In one embodiment, the method further comprises:
acquiring a preset relaxation degree threshold, and judging whether the relaxation degree at the current moment reaches the threshold;
if the relaxation degree at the current moment reaches the threshold, judging that the meditation user has entered the absorption state;
and adjusting all the dynamic objects in the dynamic image at the current moment to be static.
In one embodiment, the method further comprises:
acquiring the duration for which the meditation user has been in the absorption state and a preset duration threshold;
judging whether the duration reaches the duration threshold;
and if the duration reaches the duration threshold, displaying reminder information for stopping meditation on the dynamic image at the current moment.
In one embodiment, the method further comprises:
acquiring all the dynamic images generated from the initial moment until the meditation user enters the absorption state;
and generating a meditation presentation video of the meditation user from all the dynamic images.
In a second aspect, the present invention further provides a device for generating and playing dynamic images based on meditation training, wherein the device includes:
an obtaining module, configured to obtain the relaxation degree of a meditation user at the previous moment, the dynamic image at the previous moment, and the relaxation degree at the current moment, wherein the relaxation degree reflects the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
an adjusting module, configured to adjust the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjust the number and category of each dynamic object in the foreground image according to the adjusted picture ratio;
and determine the type of the background sound according to the adjusted category of each dynamic object, determine the volume of the background sound according to the adjusted number of each dynamic object, and determine the background sound at the current moment according to the type and volume of the background sound;
and a playing module, configured to determine the dynamic image at the current moment according to the adjusted dynamic image and play the dynamic image and the background sound at the current moment to the meditation user.
In a third aspect, an embodiment of the present invention further provides a terminal, where the terminal includes a memory and one or more processors; the memory stores one or more programs; the programs comprise instructions for carrying out the meditation-training-based dynamic image generation and playing method described in any one of the above; and the processor is configured to execute the programs.
In a fourth aspect, the present invention further provides a computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the steps of any of the above-mentioned methods for generating and playing dynamic images based on meditation training.
The invention has the following beneficial effects: for each moment in the meditation training process, an embodiment of the invention generates the dynamic image and background sound of that moment according to the change in the meditation user's relaxation degree between that moment and the previous moment. Because the relaxation degree reflects how much the meditation user is currently concentrating on meditation, the generated dynamic image and background sound amount to a record of the user's psychological change from the previous moment to the current moment, and recording with combined image and sound makes it easier for the meditation user to feel immersed. This helps the meditation user understand their current meditation state and later review the training process, solving the prior-art problem that, lacking a method of recording the meditation user's psychological changes, the training process is difficult to review from memory alone, which degrades the effect of meditation training.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flow diagram illustrating a method for generating and playing dynamic images based on meditation training according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of a meditation training-based dynamic image generation and playing device according to an embodiment of the present invention.
Fig. 3 is a schematic block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
The invention discloses a method for generating and playing dynamic images based on meditation training. To make the purpose, technical solution and effects of the invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
It will be understood by those skilled in the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
In view of the above-mentioned drawbacks of the prior art, the present invention provides a meditation-training-based dynamic image generation and playing method, comprising: acquiring the relaxation degree of a meditation user at the previous moment, the dynamic image at the previous moment, and the relaxation degree at the current moment, wherein the relaxation degree reflects the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects; adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment; adjusting the number and category of each dynamic object in the foreground image according to the adjusted picture ratio; determining the type of the background sound according to the adjusted category of each dynamic object, determining the volume of the background sound according to the adjusted number of each dynamic object, and determining the background sound at the current moment according to the type and volume of the background sound; and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image and the background sound at the current moment to the meditation user.
For each moment in the meditation training process, the invention generates the dynamic image and background sound of that moment according to the change in the meditation user's relaxation degree between that moment and the previous moment. Because the relaxation degree reflects how much the meditation user is currently concentrating on meditation, the generated dynamic image and background sound amount to a record of the user's psychological change from the previous moment to the current moment, and recording with combined image and sound makes it easier for the meditation user to feel immersed, helping the user understand their current meditation state and later review the training process. This solves the prior-art problem that, lacking a method of recording the meditation user's psychological changes, the training process is difficult to review from memory alone, which degrades the effect of meditation training.
As shown in fig. 1, the method includes:
step S100, obtaining the relaxation degree of a meditation user at the previous moment, the dynamic image at the previous moment, and the relaxation degree at the current moment, wherein the relaxation degree reflects the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects.
Specifically, the meditation user may be any user who performs meditation training. In this embodiment, the meditation user's concentration is evaluated in real time during training, and the relaxation degree at each moment is obtained from it: the higher the relaxation degree, the calmer the meditation user's current mood and the higher their concentration during the meditation training. To record the psychological changes of the meditation user during training, this embodiment generates a dynamic image with background sound for each moment, reflecting the meditation user's psychological state at that moment. Specifically, the dynamic image at each moment is generated by adjusting the dynamic image of the previous moment according to the change in relaxation degree from the previous moment to the current moment, so the user's psychological changes can be read from the changes between dynamic images at adjacent moments. Compared with recording numerical values, changes in a dynamic image reflect the meditation user's mood fluctuations more readily, create a stronger sense of immersion, and help the meditation user review the training process afterwards.
In one embodiment, if the current moment is the initial moment at which the meditation user performs the meditation training, the method for acquiring the dynamic image at the initial moment includes:
step S10, acquiring environment information of the meditation user and the relaxation degree at the initial moment;
step S11, determining an initial picture ratio according to the relaxation degree at the initial moment;
step S12, determining a target theme scene according to the environment information, wherein different theme scenes correspond to different categories of static objects and dynamic objects;
and step S13, arranging the static objects and dynamic objects corresponding to the target theme scene according to the initial picture ratio to obtain the dynamic image at the initial moment.
When the current moment is the initial moment of the meditation training, the first dynamic image of the training is to be generated. To further enhance the meditation user's sense of immersion, this embodiment presents factors of the user's present environment in the dynamic image. Specifically, environment information of the meditation user, such as air temperature and illumination, is obtained; the best-matching target theme scene is selected from a plurality of preset theme scenes according to the environment information; and the categories of static and dynamic objects contained in the interface are then determined by the target theme scene. Initial positions of each static and dynamic object are determined according to the initial picture ratio set from the initial relaxation degree, and the objects are arranged to obtain the first dynamic image.
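The screening of a target theme scene from environment information could look like the following sketch. The theme presets, their reference values, and the distance metric are all hypothetical illustrations; the patent does not specify them:

```python
def select_theme(env):
    """Pick the closest preset theme scene from the user's environment
    information (air temperature in degrees C, illumination level in [0, 1]).
    Themes and reference values are invented for illustration."""
    themes = {
        "forest":    {"temp": 18, "light": 0.7},
        "beach":     {"temp": 28, "light": 0.9},
        "night_sky": {"temp": 12, "light": 0.2},
    }

    def distance(name):
        ref = themes[name]
        # Scale temperature so roughly 20 degrees C spans the light range.
        return abs(env["temp"] - ref["temp"]) / 20 + abs(env["light"] - ref["light"])

    return min(themes, key=distance)
```

A warm, bright environment would then map to the "beach" preset, and a cool, dark one to "night_sky".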
In another implementation, the method further comprises:
acquiring a scene switching instruction input by a user;
and replacing the theme scene according to the scene switching instruction, and replacing the category of each dynamic object in the foreground image and the category of each static object in the background image according to the replaced theme scene.
For example, the theme scene may be a forest scene; the corresponding static objects may include trees and flowers, and the dynamic objects may include birds and flowing water.
In one implementation, the method for obtaining the relaxation degree at each moment includes:
step S101, acquiring electroencephalogram data and motion data of the meditation user at that moment;
and step S102, determining the relaxation degree at that moment according to the electroencephalogram data and the motion data.
Specifically, when meditation users' concentration differs, the data characteristics of their electroencephalogram data differ, as do the characteristics of their body motion, so the relaxation degree at the current moment can be determined from the meditation user's electroencephalogram data and motion data.
In one implementation, the step S102 specifically includes:
determining the current band lengths corresponding to the meditation user's three bands of the distraction wave (theta), the attention wave (SMR) and the tension wave (hi-beta) according to the electroencephalogram data;
acquiring the original band lengths corresponding to the three bands, wherein the original band lengths are the band lengths acquired from the meditation user before training;
determining the length variation corresponding to each of the three bands according to its current band length and original band length;
and determining the meditation user's relaxation degree according to the length variations of the three bands.
Specifically, the distraction wave (theta), the attention wave (SMR) and the tension wave (hi-beta) are the bands most closely related to a person's concentration. To avoid individual differences affecting the result, this embodiment does not determine the meditation user's relaxation degree directly from the band lengths of the three bands, but by analyzing how much each band's length has changed from the start of training to the current moment.
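A minimal sketch of this computation, assuming the band lengths are plain numbers and that growth in theta and SMR raises the relaxation degree while growth in hi-beta lowers it. The weights and the mapping to [0, 1] are assumptions; the patent gives no formula:

```python
def relaxation_degree(current, baseline, weights=None):
    """Relaxation degree from the per-band change between the pre-training
    baseline band lengths and the current ones. The weights below are
    hypothetical: theta/SMR growth counts positively, hi-beta negatively."""
    weights = weights or {"theta": 0.4, "smr": 0.4, "hi_beta": -0.2}
    # Relative change per band, weighted and summed.
    score = sum(weights[b] * (current[b] - baseline[b]) / baseline[b]
                for b in weights)
    # Map around a neutral 0.5 and clamp into [0, 1].
    return max(0.0, min(1.0, 0.5 + score))
```

With equal current and baseline lengths the degree sits at the neutral 0.5; rising theta and SMR push it toward 1.0.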
As shown in fig. 1, the method further comprises:
step S200, adjusting the picture ratio of the foreground image to the background image according to the difference between the relaxation degrees at the current moment and the previous moment.
Specifically, the dynamic image in this embodiment mainly includes two parts: a foreground image and a background image. The foreground image contains a number of dynamic objects, and the background image a number of static objects. The picture ratio of the foreground image to the background image directly affects the proportion of static to dynamic objects, and thus the impression the picture makes. The change in the meditation user's concentration during training is quantified as the difference between the relaxation degrees at the current and previous moments, and the picture ratio is adjusted by that difference, so that the picture makes a different impression and vividly reflects the meditation user's psychological change.
As shown in fig. 1, the method further comprises:
step S300, adjusting the number and category of each dynamic object in the foreground image according to the adjusted picture ratio.
Specifically, after the picture ratio is adjusted, the number and categories of the static and dynamic objects need to be adjusted accordingly. For example, if the adjusted picture ratio enlarges the interface area of the background image, the number and categories of static objects are increased and those of dynamic objects decreased, so that the proportion of static to dynamic objects presents an image consistent with the adjusted picture ratio.
In one implementation, each static object and each dynamic object corresponds to a different category of object, and step S300 specifically includes:
acquiring the image segmentation line between the foreground image and the background image at the previous moment, wherein the dynamic objects lie on one side of the image segmentation line and the static objects on the other;
adjusting the position of the image segmentation line according to the adjusted picture ratio;
and adjusting the dynamic and static states of the objects on the two sides of the adjusted image segmentation line.
Specifically, in this embodiment the dynamic or static state of an object on the interface is determined by the image segmentation line between the foreground image and the background image. Therefore, after the picture ratio is determined, the position of the image segmentation line is re-determined from it, and the states of the objects on its two sides are adjusted accordingly, so that the presented dynamic image matches the adjusted picture ratio. It can be understood that objects of the same category on different sides of the line have different states: for example, grass on one side of the line is static, while grass on the other side after the line is adjusted is dynamic and sways in the wind.
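The segmentation-line mechanism can be sketched as below. Treating the line as horizontal and classifying objects by a y-coordinate test is a simplifying assumption; the patent does not specify the line's orientation:

```python
def apply_segmentation_line(objects, line_y):
    """Assign dynamic/static state by a (hypothetically horizontal) image
    segmentation line: objects above the line (smaller y) are dynamic
    foreground, objects below it are static background. Moving the line
    flips the state of objects it passes over, matching the grass example."""
    for obj in objects:
        obj["dynamic"] = obj["y"] < line_y
    return objects
```

Re-running the function with the moved line is all that step S300 requires here: the same grass object can be static before the move and swaying after it.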
As shown in fig. 1, the method further comprises:
step S400, determining the type of the background sound according to the adjusted category of each dynamic object, determining the volume of the background sound according to the adjusted number of each dynamic object, and determining the background sound at the current moment according to the type and volume of the background sound.
Specifically, each static object and dynamic object on the dynamic image is an object of a different category, and in this embodiment dynamic objects are defined to have sound attributes, such as the crash of rolling sea waves or the rustle of swaying leaves. Therefore, the type of the background sound of the dynamic image at the current moment is determined from the adjusted categories of the dynamic objects; that is, the background sound is a composite of the object sounds corresponding to the dynamic objects. Furthermore, increases and decreases in the number of dynamic objects both track the meditation user's mood changes. An increase indicates that the user's concentration in the current meditation training has dropped and their mood has become more restless, so the volume of the background sound is raised to vividly express that restless state; a decrease indicates that concentration has risen and the mood is calmer, so the volume is lowered. Combining image and sound makes the meditation user feel present in the scene, further increasing their sense of immersion.
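The volume rule described here maps the dynamic-object count to a playback volume. The linear mapping and the cap of ten objects are illustrative choices, not values from the patent:

```python
def background_volume(n_dynamic_objects, n_max=10):
    """More dynamic objects mean a more restless scene and a louder
    background sound; the linear scale and the n_max cap are illustrative."""
    return max(0.0, min(1.0, n_dynamic_objects / n_max))
```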
In one implementation, determining the type of the background sound according to the adjusted category of each dynamic object specifically includes:
acquiring the sound data corresponding to each category according to the adjusted category of each dynamic object, wherein the sound data corresponding to each category is predetermined;
acquiring the sound wave data corresponding to each piece of sound data, and fusing the sound wave data to obtain fused sound wave data;
and determining the type of the background sound at the current moment according to the fused sound wave data.
Specifically, each dynamic object is stored in association with its specific sound data. To fuse the sounds into a composite sound, this embodiment acquires the sound data of each currently determined category of dynamic object and converts it into sound wave data. The pieces of sound wave data are then fused into a single piece of waveform data, the fused sound wave data. Because the fused data combines the waveform characteristics of each piece of sound wave data, it serves as the composite sound, and the type of the background sound at the current moment is determined from it.
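The fusion into a single waveform might be sketched as sample-wise averaging of equal-length sample sequences. The patent leaves the exact fusion rule unspecified, so averaging (and the toy sine tones standing in for per-category sound data) is an illustrative stand-in:

```python
import math

def sine_wave(freq_hz, n_samples, rate_hz=8000):
    """A toy per-category sound: a single sine tone."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz)
            for i in range(n_samples)]

def fuse_waveforms(waves):
    """Fuse equal-length sample sequences into one composite waveform by
    averaging corresponding samples."""
    n = len(waves)
    return [sum(samples) / n for samples in zip(*waves)]
```

Averaging keeps the composite in the same amplitude range as its inputs, so the result can be played directly as the background-sound waveform.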
In one implementation, the method further comprises:
step S20, acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation-degree interval;
step S21, determining a target filter parameter from the filter parameters according to the relaxation degree at the current moment;
and step S22, displaying the dynamic image at the current moment according to the target filter parameter.
Specifically, different filter parameters give the dynamic image different appearances. A target filter parameter corresponding to the relaxation degree at the current moment is selected from the plurality of predetermined filter parameters, and the image display is controlled according to it, so that the image style changes as the meditation user's relaxation degree changes. Through changes in the image style, the meditation user can perceive changes in their current concentration more directly, producing a stronger sense of immersion.
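Selecting a filter by relaxation-degree interval reduces to an interval lookup. The interval boundaries and filter names below are hypothetical examples:

```python
def select_filter(relaxation, filters):
    """Pick the filter parameters whose relaxation-degree interval
    [lo, hi) contains the current value; intervals are assumed to be
    listed in ascending order."""
    for (lo, hi), params in filters:
        if lo <= relaxation < hi:
            return params
    return filters[-1][1]  # out of range: fall back to the last interval

# Hypothetical preset: warmer, busier filters at low relaxation,
# cooler, calmer ones at high relaxation.
FILTERS = [((0.0, 0.4), "warm"), ((0.4, 0.8), "neutral"), ((0.8, 1.01), "cool")]
```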
As shown in fig. 1, the method further comprises:
step S500, determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image and the background sound at the current moment to the meditation user.
Specifically, the dynamic images in this embodiment record the meditation user's psychological changes during training. Besides being stored for later review of the training process, the dynamic image and its background sound can be played in real time, so that the meditation user can also observe their own psychological changes in real time during training.
In one implementation, the method further comprises:
s30, acquiring a preset loosening threshold, and judging whether the loosening at the current moment reaches the loosening threshold;
step S31, if the releasing degree at the current moment reaches the releasing degree threshold value, judging that the meditation user enters an entering state;
step S32 is to adjust all the dynamic objects in the dynamic image at the current time to be static.
Specifically, a relaxation threshold is preset in this embodiment. When the relaxation degree of the meditation user reaches the threshold, the user's concentration is high and distracting thoughts are few, so the user is judged to have entered the absorbed state. Since the user's mood is calm in the absorbed state, all of the dynamic objects in the dynamic image at the current moment are adjusted to be static. By viewing the dynamic images from each moment, the meditation user can learn when the absorbed state was entered and how long it was maintained, which helps the user review the training process.
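The absorbed-state check and object freezing can be sketched as follows. The threshold value and the object representation are illustrative assumptions:

```python
# Minimal sketch of steps S30-S32: when the relaxation degree reaches a
# preset threshold, every dynamic object in the current frame is frozen.
RELAXATION_THRESHOLD = 0.85  # assumed preset value

def update_frame(objects: list, relaxation: float) -> bool:
    """Freeze all dynamic objects if the user has entered the absorbed state.

    Each object is a dict with a boolean "moving" flag. Returns True when
    the absorbed state is detected at this moment.
    """
    absorbed = relaxation >= RELAXATION_THRESHOLD
    if absorbed:
        for obj in objects:
            obj["moving"] = False  # render this object as static
    return absorbed
```

Because the check runs each moment, the timestamps at which it first returns True and later returns False bound the period the user spent in the absorbed state.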
In one implementation, the method further comprises:
Step S40, acquiring the duration for which the meditation user has been in the absorbed state and a preset duration threshold;
Step S41, judging whether the duration reaches the duration threshold;
and Step S42, if the duration reaches the duration threshold, displaying a reminder to stop meditating on the dynamic image at the current moment.
Specifically, since the meditation user generally meditates in a seated posture, and sitting for too long is harmful to health, this embodiment sets a duration threshold in advance out of consideration for the user's health. When the duration of the absorbed state reaches the threshold, the meditation user has completed today's meditation training, and a reminder to stop meditating is displayed on the current interface. The reminder may take the form of speech or text.
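The duration check is a simple comparison; a sketch follows, with the 20-minute threshold and the function name being assumptions rather than values from this disclosure:

```python
# Sketch of steps S40-S42: once the time spent in the absorbed state
# reaches a preset duration threshold, a stop-meditation reminder is
# produced for display on the current frame.
from typing import Optional

DURATION_THRESHOLD_S = 20 * 60  # assumed 20-minute threshold, in seconds

def maybe_remind(entered_at_s: float, now_s: float) -> Optional[str]:
    """Return reminder text when the absorbed state has lasted long enough."""
    if now_s - entered_at_s >= DURATION_THRESHOLD_S:
        return "You have completed today's meditation. Please stop and rest."
    return None
```

The returned text could equally be fed to a text-to-speech engine, matching the embodiment's note that the reminder may be spoken or written.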
In one implementation, the method further comprises:
Step S50, acquiring all of the dynamic images generated from the initial moment until the meditation user enters the absorbed state;
and Step S51, generating a meditation display video of the meditation user from all of the dynamic images.
Specifically, to make it easier for the meditation user to review the training process, this embodiment integrates all of the dynamic images generated during meditation training into one video, i.e., the meditation display video. Through the changes of moving and static objects, image styles, and background sounds in the meditation display video, the meditation user can vividly see how his or her psychological state and concentration changed. The user can also draw on the meditation display video to make a targeted plan for the next training session and thereby obtain a better training effect.
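Assembling the per-moment images into the meditation display video amounts to selecting and ordering the frames captured before the absorbed state; a sketch, with the `Frame` type being an assumption (a real implementation would hand the ordered frames to a video encoder):

```python
# Sketch of steps S50-S51: collect every dynamic image from the initial
# moment up to entry into the absorbed state, order them by capture time,
# and concatenate them into one review video.
from dataclasses import dataclass

@dataclass
class Frame:
    t: float        # capture time in seconds since the initial moment
    image_id: str   # handle to the rendered dynamic image

def build_review_video(frames: list, absorbed_at: float) -> list:
    """Return image ids from the initial moment up to the absorbed state."""
    selected = [f for f in frames if f.t <= absorbed_at]
    selected.sort(key=lambda f: f.t)  # ensure chronological playback order
    return [f.image_id for f in selected]
```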
Based on the above embodiment, the present invention also provides a device for generating and playing dynamic images based on meditation training, as shown in fig. 2, the device comprising:
an obtaining module 01, configured to obtain the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, where the relaxation degree is used to reflect the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
an adjusting module 02, configured to adjust the picture ratio between the foreground image and the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjust the quantity and category of each dynamic object in the foreground image according to the adjusted picture ratio;
and determine the type of the background sound according to the category of each adjusted dynamic object, determine the volume of the background sound according to the quantity of each adjusted dynamic object, and determine the background sound at the current moment according to the type and volume of the background sound;
and a playing module 03, configured to determine the dynamic image at the current moment according to the adjusted dynamic image, and play the dynamic image and the background sound at the current moment to the meditation user.
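The adjust-and-play loop implemented by modules 01 to 03 can be sketched as one update tick. All constants and the sound mapping are illustrative assumptions, not values specified in this disclosure:

```python
# Sketch of one update tick: the foreground/background picture ratio
# follows the change in relaxation degree, the object count follows the
# ratio, and the background sound's category and volume follow the
# objects. Clamp bounds and scale factors are assumed, not prescribed.
def update_tick(prev_ratio: float, prev_relax: float, cur_relax: float,
                base_objects: int) -> dict:
    delta = cur_relax - prev_relax
    # More relaxation -> larger foreground share, clamped to [0.1, 0.9].
    ratio = min(0.9, max(0.1, prev_ratio + 0.5 * delta))
    # Object count scales with the adjusted picture ratio.
    count = max(1, round(base_objects * ratio))
    # Sound category follows the dominant object category; volume follows count.
    category = "birds" if ratio >= 0.5 else "water"
    volume = min(1.0, 0.2 + 0.1 * count)
    return {"ratio": ratio, "count": count,
            "sound": {"category": category, "volume": volume}}
```

Running this once per moment and rendering the result reproduces the behavior described for the obtaining, adjusting, and playing modules.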
Based on the above embodiments, the present invention further provides a terminal, a schematic block diagram of which may be as shown in fig. 3. The terminal comprises a processor, a memory, a network interface, and a display screen connected through a system bus. The processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program stored in the non-volatile storage medium. The network interface of the terminal is used to connect to and communicate with external terminals through a network. When executed by the processor, the computer program implements the meditation-training-based dynamic image generation and playing method. The display screen of the terminal may be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the block diagram of fig. 3 shows only part of the structure associated with the solution of the invention and does not limit the terminals to which the solution of the invention may be applied; a specific terminal may comprise more or fewer components than those shown in the figure, combine certain components, or arrange the components differently.
In one implementation, one or more programs are stored in the memory of the terminal and configured to be executed by one or more processors, the one or more programs including instructions for performing the meditation-training-based dynamic image generation and playing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a method for generating and playing dynamic images based on meditation training, the method comprising: acquiring the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, where the relaxation degree is used to reflect the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects; adjusting the picture ratio between the foreground image and the background image according to the difference between the relaxation degrees at the current moment and the previous moment; adjusting the quantity and category of each dynamic object in the foreground image according to the adjusted picture ratio; determining the type of the background sound according to the category of each adjusted dynamic object, determining the volume of the background sound according to the quantity of each adjusted dynamic object, and determining the background sound at the current moment according to the type and volume of the background sound; and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image and the background sound at the current moment to the meditation user.
For each moment in the meditation training process, the invention generates the dynamic image and the background sound at that moment according to the change in the relaxation degree of the meditation user between that moment and the previous moment. Since the relaxation degree reflects how intently the meditation user is currently concentrating on meditation, the generated dynamic image and background sound amount to a record of the user's psychological change from the previous moment to the current moment, and this recording mode, which combines images and sounds, makes it easier for the meditation user to feel immersed, helping the user understand his or her current meditation state and review the training process afterward. This solves the problem in the prior art that, for lack of a method of recording the meditation user's psychological changes, the user must rely on memory and finds it difficult to review the training process, which affects the effect of meditation training.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for generating and playing dynamic images based on meditation training, the method comprising:
acquiring the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, wherein the relaxation degree is used to reflect the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
adjusting the picture ratio between the foreground image and the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjusting the quantity and category of each dynamic object in the foreground image according to the adjusted picture ratio;
determining the type of the background sound according to the category of each adjusted dynamic object, determining the volume of the background sound according to the quantity of each adjusted dynamic object, and determining the background sound at the current moment according to the type and volume of the background sound;
and determining the dynamic image at the current moment according to the adjusted dynamic image, and playing the dynamic image and the background sound at the current moment to the meditation user.
2. The meditation-training-based dynamic image generation and playing method as claimed in claim 1, wherein if the current moment is the initial moment at which the meditation user performs meditation training, the method for acquiring the dynamic image at the initial moment comprises:
acquiring environment information of the meditation user and the relaxation degree at the initial moment;
determining the initial picture ratio according to the relaxation degree at the initial moment;
determining a target theme scene according to the environment information, wherein different theme scenes respectively correspond to different types of static objects and dynamic objects;
and arranging the static objects and the dynamic objects corresponding to the target theme scene according to the initial picture ratio to obtain the dynamic image at the initial moment.
3. The meditation-training-based dynamic image generation and playing method as claimed in claim 1, wherein the method of obtaining the relaxation degree at each moment comprises:
acquiring electroencephalogram data and motion data of the meditation user at that moment;
and determining the relaxation degree at that moment according to the electroencephalogram data and the motion data.
4. The meditation-training-based dynamic image generation and playing method as claimed in claim 1, wherein the method further comprises:
acquiring a plurality of preset filter parameters, wherein each filter parameter corresponds to a different relaxation-degree interval;
determining the target filter parameter from the filter parameters according to the relaxation degree at the current moment;
and displaying the dynamic image at the current moment according to the target filter parameter.
5. The meditation-training-based dynamic image generation and playing method as claimed in claim 2, wherein the method further comprises:
acquiring a preset relaxation threshold, and judging whether the relaxation degree at the current moment reaches the relaxation threshold;
if the relaxation degree at the current moment reaches the relaxation threshold, judging that the meditation user has entered the absorbed state;
and adjusting all of the dynamic objects in the dynamic image at the current moment to be static.
6. The meditation-training-based dynamic image generation and playing method as claimed in claim 5, wherein the method further comprises:
acquiring the duration for which the meditation user has been in the absorbed state and a preset duration threshold;
judging whether the duration reaches the duration threshold;
and if the duration reaches the duration threshold, displaying a reminder to stop meditating on the dynamic image at the current moment.
7. The meditation-training-based dynamic image generation and playing method as claimed in claim 5, wherein the method further comprises:
acquiring all of the dynamic images generated from the initial moment until the meditation user enters the absorbed state;
and generating a meditation display video of the meditation user from all of the dynamic images.
8. A meditation-training-based dynamic image generation and playing device, characterized in that the device comprises:
an obtaining module, configured to obtain the relaxation degree and dynamic image of a meditation user at the previous moment and the relaxation degree at the current moment, where the relaxation degree is used to reflect the degree to which the meditation user is concentrating on meditation, the dynamic image at the previous moment comprises a foreground image and a background image, and the foreground image comprises a plurality of dynamic objects;
an adjusting module, configured to adjust the picture ratio between the foreground image and the background image according to the difference between the relaxation degrees at the current moment and the previous moment;
adjust the quantity and category of each dynamic object in the foreground image according to the adjusted picture ratio;
and determine the type of the background sound according to the category of each adjusted dynamic object, determine the volume of the background sound according to the quantity of each adjusted dynamic object, and determine the background sound at the current moment according to the type and volume of the background sound;
and a playing module, configured to determine the dynamic image at the current moment according to the adjusted dynamic image, and play the dynamic image and the background sound at the current moment to the meditation user.
9. A terminal, characterized in that the terminal comprises a memory and one or more processors; the memory stores one or more programs; the programs comprise instructions for carrying out the meditation-training-based dynamic image generation and playing method as claimed in any one of claims 1-7; and the processor is configured to execute the programs.
10. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to perform the steps of the meditation-training-based dynamic image generation and playing method as claimed in any one of claims 1-7.
CN202211704792.1A 2022-12-29 2022-12-29 Dynamic image generation and playing method based on meditation training Active CN115662575B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211704792.1A CN115662575B (en) 2022-12-29 2022-12-29 Dynamic image generation and playing method based on meditation training


Publications (2)

Publication Number Publication Date
CN115662575A true CN115662575A (en) 2023-01-31
CN115662575B CN115662575B (en) 2023-06-06

Family

ID=85023047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211704792.1A Active CN115662575B (en) 2022-12-29 2022-12-29 Dynamic image generation and playing method based on meditation training

Country Status (1)

Country Link
CN (1) CN115662575B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200030571A1 (en) * 2018-07-25 2020-01-30 James Gilbert Esch Duran Cyclical Visual Effect To Assist Meditation And Mindfulness
CN112905015A (en) * 2021-03-08 2021-06-04 华南理工大学 Meditation training method based on brain-computer interface
CN113457135A (en) * 2021-06-29 2021-10-01 网易(杭州)网络有限公司 Display control method and device in game and electronic equipment
CN113552946A (en) * 2021-07-21 2021-10-26 浙江强脑科技有限公司 Meditation training method, device, terminal and medium based on intelligent wearable device
CN114625301A (en) * 2022-05-13 2022-06-14 厚德明心(北京)科技有限公司 Display method, display device, electronic equipment and storage medium
CN115206489A (en) * 2022-07-20 2022-10-18 上海暖禾脑科学技术有限公司 Meditation training method and device based on nerve feedback system and electronic equipment
WO2022234934A1 (en) * 2021-05-06 2022-11-10 안형철 Vr meditation system

Also Published As

Publication number Publication date
CN115662575B (en) 2023-06-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant