CN112927665A - Authoring method, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
CN112927665A
CN112927665A (application CN202110093846.4A; granted as CN 112927665 B)
Authority
CN
China
Prior art keywords
music, time point, behavior, unconscious, sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110093846.4A
Other languages
Chinese (zh)
Other versions
CN112927665B (en)
Inventor
王杨 (Wang Yang)
刘鹏 (Liu Peng)
王佳 (Wang Jia)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Music Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Music Co Ltd, and MIGU Culture Technology Co Ltd
Priority: CN202110093846.4A
Publication of CN112927665A
Application granted
Publication of CN112927665B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H: ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00: Details of electrophonic musical instruments
    • G10H 1/0033: Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H 1/36: Accompaniment arrangements
    • G10H 1/40: Rhythm
    • G10H 2210/00: Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/101: Music composition or musical creation; tools or processes therefor

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Electrophonic Musical Instruments (AREA)

Abstract

Embodiments of the invention relate to the field of Internet technology and disclose an authoring method, an electronic device, and a computer-readable storage medium. The authoring method comprises the following steps: during music playback, acquiring an unconscious behavior of a target object and a corresponding time point; adjusting a characteristic of the music according to the unconscious behavior and the time point; and obtaining a creative work from the adjusted music. The authoring method provided by the embodiments enables the target object to participate unconsciously in the creation process, effectively improves the interactivity and enjoyment of that process, and makes the resulting works more authentic and expressive, thereby improving their quality.

Description

Authoring method, electronic device, and computer-readable storage medium
Technical Field
Embodiments of the invention relate to the field of Internet technology, and in particular to an authoring method, an electronic device, and a computer-readable storage medium.
Background
Music is an art form that exists through sound waves and evokes diverse emotional reactions and experiences through the auditory organs. After hearing certain music, humans, and even animals, can exhibit a variety of physiological reactions and psychological activities, which influence their emotions, mind, or thoughts and, to some extent, their behavior. Music of different genres, melodies, and timbres, and even music in different languages, can elicit widely varying responses from humans or animals. Beyond merely being influenced by music, humans can also re-create it according to their own feelings, through activities such as covering, adaptation, imitation, remixing, collage, and arrangement.
Disclosure of Invention
Embodiments of the present invention aim to provide an authoring method that enables a target object to participate unconsciously in the creation process, effectively improves the interactivity and enjoyment of that process, and makes the resulting works more authentic and expressive, thereby improving their quality.
To address the above technical problem, an embodiment of the present invention provides an authoring method comprising the following steps: during music playback, acquiring an unconscious behavior of a target object and a corresponding time point; adjusting a characteristic of the music according to the unconscious behavior and the corresponding time point; and obtaining a creative work from the adjusted music.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the authoring method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program, which when executed by a processor implements the above-described authoring method.
In embodiments of the invention, the unconscious behaviors of the target object and their corresponding time points are acquired during music playback. Because music has its own distinctive rhythm, the target object makes various unconscious behaviors along with the melody while the music plays, and these behaviors truly and accurately represent the interaction between the target object and the music, reflecting how the target object feels about it. Adjusting the characteristics of the music according to those behaviors and time points lets the target object's unconscious behaviors take part in the re-creation of the music at the corresponding moments, effectively increasing the target object's degree of participation. The creative work is then obtained from the adjusted music. By contrast, most existing music re-creation is done by a creator reworking the original music according to his or her own ideas; such a process is strongly subjective and purposive and cannot genuinely reflect the creator's true feeling for the music.
In addition, the unconscious behavior includes a sound-making behavior, and the time point includes the time point corresponding to that behavior. Adjusting the characteristics of the music according to the unconscious behavior and the time point then includes: extracting a first audio track corresponding to the music; creating an empty audio track whose time axis is the same as that of the first track; adding the sound to a target position in the empty track to obtain a second audio track, the target position being the time point corresponding to the sound-making behavior; and adjusting the characteristics of the music based on the first and second audio tracks. The sounds a target object makes vividly reflect its real feelings, so adjusting the music according to those sounds makes the adjustment more accurate and further improves the quality of the resulting work.
Additionally, the sound-making behavior includes a beat-making behavior. Adding the sound to a target position in the empty track to obtain the second track then includes: determining the force of the beat made by the target object; adjusting the volume of a preset sound-effect audio according to that force to obtain a sound-effect audio corresponding to the beat; and adding this sound-effect audio to the target position in the empty track, the target position being the time point corresponding to the beat. Because a clean sound-effect audio stands in for the raw beat sound, external noise does not interfere, the second track has higher audio quality, and the resulting work better conveys the target object's real emotion.
In addition, acquiring the unconscious behavior and corresponding time point of the target object during music playback includes: recording a video of the target object while the music plays, and deriving the unconscious behaviors and their time points from that video. Obtaining the creative work from the adjusted music then includes obtaining it from both the video and the adjusted music. Combining the target object's video with the adjusted music makes the resulting work more vivid and its content richer, further improving its quality.
Additionally, the characteristic of the music includes volume, the unconscious behavior includes an action, and the corresponding time point includes the time point of that action. Adjusting the characteristics of the music according to the unconscious behavior and the time point then includes: determining an emotion score corresponding to the action, and adjusting the volume of the music according to that score and the action's time point. Adjusting the volume as the emotion score changes matches the volume of the work to the target object's real feelings, making the work more authentic and vivid.
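A tiny sketch of the volume adjustment just described. The linear one-dB-per-score-point mapping and the per-second score list are illustrative assumptions on our part; the text only says the volume is adjusted according to the emotion score and the action's time point.

```python
def adjust_volume(base_db, emotion_scores, gain_per_point=1):
    """Per-second music volume driven by the emotion score at each time point.

    gain_per_point (dB per score point) is an illustrative assumption,
    not a value given in the document.
    """
    return [base_db + gain_per_point * s for s in emotion_scores]

# Illustrative emotion scores per second, around a 60 dB base volume.
volumes = adjust_volume(60, [0, 5, 10, 3])
```

The returned list can then be applied second-by-second along the music's time axis.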
In addition, obtaining the creative work from the video and the adjusted music includes: determining an emotion score for each time point of the music from the emotion scores and time points of the actions; cutting a highlight video out of the recorded video according to the time points at which the emotion score exceeds a preset threshold and a preset highlight-clip duration; and obtaining the creative work from the highlight video and the adjusted music, which makes the work more engaging.
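As a sketch of the highlight-interception step described above, the following hypothetical Python function picks a fixed-length clip window around each second whose emotion score exceeds the threshold. The exact placement of the window relative to the high-scoring moment is our assumption; the text only specifies a threshold and a preset clip duration.

```python
def highlight_windows(scores, threshold, clip_len_s):
    """Time windows (start, end) to cut from the video: one clip of
    clip_len_s seconds roughly centered on each second whose emotion
    score exceeds the threshold. Centering is an assumed policy."""
    windows = []
    for t, score in enumerate(scores):
        if score > threshold:
            half = clip_len_s // 2
            start = max(0, t - half)   # clamp at the video's start
            windows.append((start, start + clip_len_s))
    return windows

# Emotion scores per second (illustrative); threshold 80, 4-second clips.
scores = [10, 20, 85, 30, 90, 15]
clips = highlight_windows(scores, 80, 4)
```

A production version would also merge overlapping windows and clamp against the video's end.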
In addition, the number of target objects may be n, where n is an integer greater than 1. Acquiring the unconscious behaviors and corresponding time points during playback then includes: cutting the music into n segments, each corresponding to one of the n target objects; and, while each segment plays, acquiring the unconscious behaviors and time points of the target object assigned to it, yielding behaviors and time points for all n target objects. Adjusting the characteristics of the music then means adjusting each of the n segments according to its own target object's behaviors and time points. Having several target objects create together further improves the interactivity and enjoyment of the process.
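The segment-splitting step above can be sketched as follows. The even split, with the last segment absorbing any remainder, is an assumption on our part, since the text does not specify how the n segments are sized.

```python
def split_music(duration_s, n):
    """Cut a piece of duration_s seconds into n consecutive (start, end)
    segments, one per target object; the last segment takes the remainder."""
    seg = duration_s // n
    segments = []
    for i in range(n):
        start = i * seg
        end = duration_s if i == n - 1 else (i + 1) * seg
        segments.append((start, end))
    return segments

# A 200-second piece shared by 3 target objects (illustrative numbers).
segments = split_music(200, 3)
```

Each (start, end) pair is then played for its assigned target object while that object's behaviors are captured.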
In addition, with n target objects (n an integer greater than 1), acquisition may instead proceed by register: obtaining n pitch ranges of the music, one per target object; acquiring the unconscious behaviors and corresponding time points of all n target objects during playback; and adjusting the music within each of the n ranges according to the corresponding target object's behaviors and time points. Joint creation over the n ranges likewise further improves the interactivity and enjoyment of the process.
Drawings
One or more embodiments are illustrated by the corresponding figures in the drawings, which are not meant to be limiting.
FIG. 1 is a flowchart of an authoring method according to a first embodiment of the present invention;
FIG. 2 is a flowchart of adjusting the characteristics of music according to unconscious behaviors and time points, in the first embodiment;
FIG. 3 is a flowchart of adding sound to a target position in an empty track to obtain a second track, in the first embodiment;
FIG. 4 is a flowchart of an authoring method according to a second embodiment of the present invention;
FIG. 5 is a flowchart of adjusting the characteristics of music according to an unconscious behavior and a corresponding time point, in the second embodiment;
FIG. 6 is a flowchart of obtaining a creative work from a video and adjusted music, in the second embodiment;
FIG. 7 is a flowchart of an authoring method according to a third embodiment of the present invention;
FIG. 8 is a flowchart of an authoring method according to a fourth embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an electronic device according to a fifth embodiment of the present invention.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that numerous technical details are set forth in order to provide a better understanding of the present application; the claimed technical solution can nevertheless be implemented without these details, and with various changes and modifications to the following embodiments. The embodiments are divided for convenience of description only, should not limit the specific implementation of the invention, and may be combined and cross-referenced where no contradiction arises.
A first embodiment of the present invention relates to an authoring method applied to an electronic device. The electronic device may be a terminal or a server; in this embodiment and those that follow, a server is taken as the example. Implementation details of the authoring method are described below; they are provided to aid understanding and are not all required to practice this embodiment.
Application scenarios for embodiments of the present invention may include, but are not limited to: any room equipped with a home music system; a music classroom of an educational institution such as a school or kindergarten; a singing hall; a pub or gymnasium equipped with a background-music system; a professional recording studio of a record company or an independent musician; and the like.
The specific flow of the authoring method of this embodiment may be as shown in FIG. 1 and includes:
Step 101: during music playback, acquiring an unconscious behavior of a target object and a corresponding time point;
Specifically, the server can acquire the target object's unconscious behaviors and their corresponding time points in real time while the music plays. An unconscious behavior is an instinctive behavior made without subjective analysis or judgment. Because music has its own distinctive rhythm, the target object makes various unconscious behaviors along with the melody, and these behaviors truly and accurately represent the interaction between the target object and the music, reflecting how the target object feels about it.
In a specific implementation, the server may determine the music to be re-created and send it to a playback device, which may be an Internet-connected speaker, a home radio, a mobile phone, a desktop or notebook computer, a tablet, a professional playback-and-recording device, or the like. Before playback, the server may also determine the identity of the target object; during playback, a camera and/or microphone, either built into the playback device or externally connected, acquires the target object's unconscious behaviors and their corresponding time points in real time. Because the behaviors are captured over the whole course of playback, each time point, i.e., the moment a given unconscious behavior occurs, also corresponds to a position on the music's time axis. The target object is an individual that does not act with deliberate awareness in this setting, such as a young child, a cat, or a dog.
In one example, the server determines that the music to be re-created is music A and the target object is puppy A. While music A plays, the server records puppy A's unconscious vocal behaviors, such as barking, through a microphone, captures unconscious movements such as circling and running through a camera, and records the time point at which each behavior occurs.
For example: puppy A barks at the 45th second of music A. The server acquires the bark at that moment, records the corresponding time point as 45 seconds, and stores the recording in its internal database.
In another example, the server determines that the music to be re-created is music B and the target object is child B. While music B plays, the server records unconscious vocal behaviors such as shouting through a microphone, captures unconscious movements such as clapping and stomping through a camera, and records the time points at which they occur.
For example: child B claps and laughs at the 27th second of music B. The server acquires the clapping action and the laughter at that moment, records the corresponding time point as 27 seconds, and stores it in its internal database.
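As an illustration of how the captured behaviors and time points might be stored, here is a minimal event log in Python. The schema (`BehaviorEvent`, `BehaviorLog`) is our own hypothetical construction, not something specified in the document.

```python
from dataclasses import dataclass

@dataclass
class BehaviorEvent:
    """One unconscious behavior captured during playback (hypothetical schema)."""
    behavior: str       # e.g. "bark", "clap", "laugh"
    time_point: float   # seconds from the start of the music's time axis

class BehaviorLog:
    """Collects events so later steps can look them up by time point."""
    def __init__(self):
        self.events = []

    def record(self, behavior, time_point):
        self.events.append(BehaviorEvent(behavior, time_point))

    def at(self, time_point):
        """All behaviors observed at a given second of playback."""
        return [e.behavior for e in self.events if e.time_point == time_point]

# The two examples from the text.
log = BehaviorLog()
log.record("bark", 45.0)    # puppy A at the 45th second of music A
log.record("clap", 27.0)    # child B at the 27th second of music B
log.record("laugh", 27.0)
```

The adjustment steps that follow only need to query this log by time point.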
Step 102: adjusting the characteristics of the music according to the unconscious behaviors and the time points;
Specifically, after obtaining the target object's unconscious behaviors and their corresponding time points, the server can adjust the characteristics of the music accordingly.
In particular implementations, the characteristics of the music include its key, timbre, rhythm, volume, tempo, and the like.
In one example, the unconscious behavior of the target object includes dazing, i.e., the target object shows no interest in the music being re-created. According to the acquired dazing behavior and its corresponding time point, the server turns the music down at that time point.
In one example, the unconscious behavior of the target object includes a sound-making behavior, i.e., the target object makes a sound, and the corresponding time point includes the time point of that behavior. Adjusting the characteristics of the music according to the unconscious behavior and the time point can then be realized through the sub-steps shown in FIG. 2, as follows:
Sub-step 201: extracting a first audio track corresponding to the music;
Specifically, the server may extract a first audio track corresponding to the music.
In a particular implementation, a track appears as one of the parallel horizontal "strips" seen in audio-mixing software. Each track defines its own attributes, such as its timbre, timbre library, number of channels, input/output ports, and volume.
In one example, the playback device plays music in Moving Picture Experts Group Audio Layer III (MP3) format, and the server can use the MP3 file directly as the first audio track.
In another example, the playback device plays a music video (MV) of the music; the server may identify the MV's video stream and audio stream separately and extract the audio stream as the first track corresponding to the music.
Sub-step 202: creating an empty audio track;
Specifically, after extracting the first track corresponding to the music, the server may create an empty track whose time axis is the same as that of the first track.
In a specific implementation, after acquiring the first track, the server can create a new empty track from the first track's time axis. Because the two time axes are aligned, the time point of each sound-making behavior of the target object also maps onto the new empty track.
Sub-step 203: adding the sound to a target position in the empty audio track, obtaining a second audio track;
Specifically, after creating the empty audio track, the server may add the sound to the target position in it, acquiring the second audio track. The target position is the time point corresponding to the sound-making behavior.
In a specific implementation, after creating the empty track, the server can fill each sound made by the target object into the target position of that track, i.e., at the time point of the corresponding sound-making behavior, obtaining the second track.
In one example, the music to be re-created is music B and the target object is child B. Child B laughs from the 2nd to the 4th second, cries from the 27th to the 31st second, and talks from the 125th to the 133rd second. The server acquires these three sound-making behaviors and their time points, fills the laughter into the 2nd to 4th seconds of the empty track, the crying into the 27th to 31st seconds, and the talking into the 125th to 133rd seconds, obtaining the second track.
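A minimal sketch of sub-step 203, with each track modeled as one labeled slot per second. This representation is our assumption, a stand-in for real audio buffers, but it shows how captured sounds land at their original time points on the shared time axis.

```python
def make_empty_track(duration_s):
    """An empty track sharing the first track's time axis: one slot per second."""
    return [None] * duration_s

def add_sound(track, start_s, end_s, sound):
    """Fill a captured sound into its target position, i.e. the seconds
    at which the sound-making behavior originally occurred."""
    for t in range(start_s, end_s + 1):
        track[t] = sound
    return track

# Child B's sounds from the example, placed on a 180-second empty track.
second_track = make_empty_track(180)
add_sound(second_track, 2, 4, "laugh")
add_sound(second_track, 27, 31, "cry")
add_sound(second_track, 125, 133, "talk")
```

All other slots stay silent, so mixing this track with the first one leaves the music untouched elsewhere.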
In another example, the sound-making behavior includes a beat-making behavior, i.e., the sounds made by the target object include beat sounds, and the corresponding time points include the time points of the beats. Adding the sound to a target position in the empty track to obtain the second track can then be realized through the sub-steps shown in FIG. 3, as follows:
Sub-step 301: determining the force corresponding to the beat-making behavior;
Specifically, after acquiring the beat-making behavior and its corresponding time point, the server can determine the force of the beat made by the target object. Beat-making behaviors include the target object tapping the playback device, clapping its hands, stomping its feet, and the like.
In a specific implementation, a pressure sensor may be mounted on the surface of the playback device or on the ground where the target object stands, or portable pressure sensors may be attached to the target object's hands, so that the server can acquire in real time the force with which the target object slaps the playback device, stomps, and so on.
In one example, the music to be re-created is "Jingle Bells" and the target object is child C. The server determines that child C slaps the playback device at the 41st second and, via the pressure sensor on the device, that the force of that beat is 5 N.
Sub-step 302: adjusting the volume of a preset sound-effect audio according to the force of the beat, obtaining a sound-effect audio corresponding to the beat-making behavior;
Specifically, after determining the force corresponding to the beat, the server can adjust the volume of a preset sound-effect audio according to that force, obtaining the sound-effect audio corresponding to the beat. The preset sound-effect audio may be chosen by those skilled in the art according to actual needs; embodiments of the invention do not limit it. It may be stored as a beat audio file.
In one example, the preset sound-effect audio is a conga sound effect. Child C makes beats with forces of 5 N at the 12th second, 7 N at the 13th, 6 N at the 14th, 5 N at the 15th, and 4 N at the 16th, and the music's volume is 60 dB. The server sets the volume of the sound-effect audio for each beat accordingly: 12th second, 60 + 5 = 65 dB; 13th second, 60 + 7 = 67 dB; 14th second, 60 + 6 = 66 dB; 15th second, 60 + 5 = 65 dB; 16th second, 60 + 4 = 64 dB.
In one example, the server sets an upper limit on the volume adjustment of the sound-effect audio, so that the sound-effect audio corresponding to a beat never exceeds that limit; capping the adjustment protects hearing.
For example: the upper limit of the volume adjustment is 10 dB. Child C makes a beat of 12 N at the 165th second, so the server sets the volume of the corresponding sound-effect audio to 60 + 10 = 70 dB.
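The volume arithmetic in the two examples above, including the upper-limit clamp, fits in a few lines. The direct newton-to-decibel addition follows the document's own worked numbers rather than any acoustic law, and the 1 N to 1 dB mapping is its convention, not ours.

```python
def effect_volume(base_db, beat_force_n, upper_limit_db=10):
    """Volume of the preset sound-effect audio for one beat: the beat
    force (in N) is added to the music's base volume (in dB), capped by
    an adjustment upper limit to protect hearing."""
    return base_db + min(beat_force_n, upper_limit_db)

# Examples from the text: base volume 60 dB.
assert effect_volume(60, 5) == 65    # 12th second, 5 N
assert effect_volume(60, 7) == 67    # 13th second, 7 N
assert effect_volume(60, 12) == 70   # 165th second, 12 N clamped at +10 dB
```

The clamp is what implements the "volume adjustment upper limit value" of the preceding example.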
Sub-step 303: adding the sound-effect audio corresponding to the beat to a target position in the empty audio track, acquiring a second audio track;
Specifically, after obtaining the sound-effect audio corresponding to each beat, the server can add it to the target position in the empty track, i.e., the time point corresponding to the beat, obtaining the second track. Because clean sound-effect audio is used in place of the raw beat sound, the second track has higher audio quality, and the resulting work better conveys the target object's real emotion.
Sub-step 204: adjusting the characteristics of the music based on the first and second audio tracks;
Specifically, after acquiring the second track from the target object's sound-making behaviors and their time points, the server may adjust the characteristics of the music according to the first and second tracks.
In a specific implementation, after the server acquires the first audio track and the second audio track, the server may synthesize the two audio tracks according to the time axis of the first audio track and the time axis of the second audio track to adjust the characteristics of the music, and acquire a new audio file, that is, the adjusted music.
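A sketch of sub-step 204's synthesis of the two aligned tracks, modeling each track as one slot per second. This is a stand-in for real audio buffers, an assumption of ours; actual mixing would sum waveforms sample-by-sample along the shared time axis.

```python
def mix_tracks(first, second):
    """Synthesize the first (music) track and the second (captured
    sounds) track along their shared time axis."""
    assert len(first) == len(second), "time axes must match"
    mixed = []
    for music_part, sound_part in zip(first, second):
        if sound_part is None:
            mixed.append(music_part)                 # music only
        else:
            mixed.append((music_part, sound_part))   # music with overlaid sound
    return mixed

# Five seconds of music with a laugh captured at seconds 2-3.
first_track = ["music"] * 5
overlay_track = [None, None, "laugh", "laugh", None]
adjusted = mix_tracks(first_track, overlay_track)
```

The mixed result is the "new audio file", i.e., the adjusted music, that step 103 consumes.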
Step 103: obtaining the creative work from the adjusted music.
Specifically, after adjusting the characteristics of the music according to the unconscious behavior and the corresponding time point, the server may obtain the creative work from the adjusted music.
In a specific implementation, after obtaining the adjusted music, the server may filter noise, add reverberation, improve the sound quality, and so on, producing a complete audio file; it then outputs the creative work and stores it in its internal database or uploads it to a preset cloud database.
In one example, after obtaining the creative work, the server can play it back immediately for subsequent revision and improvement.
In the first embodiment of the invention, the unconscious behaviors of the target object and their corresponding time points are acquired during music playback. Because of the unique rhythmicity of music, the target object makes various unconscious behaviors along with the melody in a music-playing scene; these behaviors truly and accurately represent the interaction between the target object and the played music and reflect how the target object feels about it. Adjusting the characteristics of the music according to the unconscious behaviors and time points lets the behaviors made by the target object participate in the re-creation of the music at the corresponding moments, effectively increasing the target object's degree of participation in the re-creation. The creative work is then acquired from the adjusted music. By contrast, in most related music re-creation processes a creator re-creates the original music according to his or her own ideas, so the whole process is strongly subjective and purposeful and cannot faithfully reflect the creator's real feeling for the music.
A second embodiment of the present invention relates to an authoring method. Implementation details of this embodiment are described below; they are provided only to ease understanding and are not necessary for implementing the solution. Fig. 4 is a schematic diagram of the authoring method of the second embodiment, which includes:
step 401, recording a video of a target object in the process of playing music;
specifically, the server may record a video of the target object during the music playing.
In a specific implementation, the server may call a camera and a microphone built into the playback device, or ones externally connected to it, to record the target object throughout the music playback and obtain the corresponding video.
Step 402, acquiring an unconscious behavior of a target object and a corresponding time point according to a video;
specifically, after the server acquires the video, the server may acquire an unconscious behavior of the target object and a corresponding time point according to the video.
Step 403, adjusting the characteristics of the music according to the unconscious behaviors and the time points;
In one example, the characteristics of the music include its volume, the unconscious behavior of the target object includes an action made by the target object, and the corresponding time point includes the time point corresponding to that action. Adjusting the characteristics of the music according to the unconscious behavior and the corresponding time point can be implemented by the sub-steps shown in fig. 5, as follows:
Step 4031, determining an emotion score corresponding to the action;
in particular, the server may determine an emotion score corresponding to the action.
In a specific implementation, a number of standard actions and their preset emotion scores are configured in the server in advance; the server can compare the action made by the target object with the preset standard actions and determine the emotion score for that action. The preset standard actions and their emotion scores may be set by those skilled in the art according to actual needs, and the embodiments of the present invention do not specifically limit them. It is understood that the target object's actions include facial expressions, limb movements, and the like.
In one example, the server may input the acquired video into a pre-trained action judgment model to determine the actions made by the target object during music playback and the time points corresponding to those actions. The preset standard actions include laughing, smiling, calmness, anger, fear, sadness, crying loudly, clapping, stomping, covering the ears, rolling on the floor, and the like, where laughing corresponds to an emotion score of 3 points; smiling to 2; calmness to 1; anger to 0; fear to -1; sadness to -2; crying loudly to -3; clapping to 2; stomping to 2; covering the ears to -2; and rolling on the floor to -3.
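The score lookup above can be sketched as a simple table. The score values come from the example in the text; the dictionary keys and function name are illustrative assumptions.

```python
# Example score table from the embodiment; labels are assumed English
# renderings of the standard actions.
EMOTION_SCORES = {
    "laughing": 3, "smiling": 2, "calmness": 1, "anger": 0,
    "fear": -1, "sadness": -2, "crying loudly": -3,
    "clapping": 2, "stomping": 2, "covering ears": -2, "rolling": -3,
}

def score_actions(actions):
    """Total emotion score for the actions detected at one time point.

    Actions not in the table contribute 0, mirroring the idea that only
    preset standard actions are scored.
    """
    return sum(EMOTION_SCORES.get(a, 0) for a in actions)
```

For instance, laughing loudly while clapping would total 3 + 2 = 5 points.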
Step 4032, adjusting the volume of the music according to the emotion scores corresponding to the motions and the time points corresponding to the motions;
specifically, after determining the emotion score corresponding to the motion according to the motion made by the target object, the server may adjust the volume of the music according to the emotion score corresponding to the motion and the time point corresponding to the motion.
In one example, the music to be re-created is music B with a volume of 60 dB, and the target object is child B. The server determines that child B covers his ears at the 41st second, with an emotion score of -2, so it may adjust the volume of music B from the 40th to the 42nd second to 60 + (-2) = 58 dB; child B laughs loudly and claps at the 133rd second, with an emotion score of 3 + 2 = 5, so the server may adjust the volume from the 132nd to the 134th second to 60 + 5 = 65 dB. The server may also apply a fade-in/fade-out to the volume adjustment, i.e., increase or decrease the volume gradually.
In one example, the server may further set a volume upper limit and a volume lower limit, so that the volume of the creative work stays less than or equal to the preset upper limit and greater than or equal to the preset lower limit. This ensures the volume is never so high that it harms the listener's hearing, nor so low that the audio effect suffers.
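The clamped volume adjustment can be sketched as below. The "one score point = 1 dB" mapping follows the worked example in the text; the 40/80 dB limits and the function name are assumptions for illustration.

```python
def adjusted_volume(base_db, emotion_score, lower_db=40.0, upper_db=80.0):
    """Add the emotion score (treated as a dB offset, as in the example)
    to the base volume, then clamp to the preset lower/upper limits."""
    return max(lower_db, min(upper_db, base_db + emotion_score))
```

With a 60 dB base, a score of -2 yields 58 dB and a score of 5 yields 65 dB, matching the music B example; extreme scores are clamped to the limits.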
And step 404, acquiring the creative work according to the video and the adjusted music.
Specifically, the server may obtain the creative work from the video and the adjusted music after obtaining the adjusted music. The video of the target object is combined on the basis of the adjusted music, so that the obtained creative work is more vivid, the content is richer, and the quality of the creative work is further improved.
In a specific implementation, the server may separate the audio stream and the video stream of the video, strip out the video stream, use the adjusted music as the new audio stream, and synthesize it with the stripped video stream into a new complete video file as the creative work.
In one example, obtaining the creative work according to the video and the adjusted music may be implemented by the sub-steps shown in fig. 6, which are as follows:
step 4041, determining emotion scores corresponding to the time points of the music according to the emotion scores corresponding to the actions and the time points corresponding to the actions;
specifically, the server may determine the emotion score corresponding to each time point of the music from the emotion score corresponding to the action and the time point corresponding to the action.
In a specific implementation, the target object may make multiple actions at the same time point. The server may determine the emotion score at each time point of the music from the emotion scores of the individual actions and their corresponding time points, that is, compute the sum of the emotion scores at each time point.
In one example, the music to be re-created is music B, and the target object is child B. Child B covers his ears, rolls on the floor, and cries loudly at the 99th second, so the server determines that child B's emotion score at the 99th second of music B is (-2) + (-3) + (-3) = -8; child B laughs, claps, and stomps at the 183rd second, so the server determines that the emotion score at the 183rd second of music B is 3 + 2 + 2 = 7.
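Aggregating per-time-point totals from detected (time, action) events can be sketched as follows; the score table values repeat the embodiment's example, while the event tuple format and names are assumptions.

```python
from collections import defaultdict

# Example values from the embodiment's score table.
SCORE_TABLE = {
    "laughing": 3, "clapping": 2, "stomping": 2,
    "covering ears": -2, "rolling": -3, "crying loudly": -3,
}

def scores_by_time(events, score_table):
    """Sum emotion scores per time point from (time, action) events."""
    totals = defaultdict(int)
    for t, action in events:
        totals[t] += score_table.get(action, 0)
    return dict(totals)
```

Feeding in child B's actions at the 99th and 183rd seconds reproduces the totals of -8 and 7 computed above.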
Step 4042, intercepting a highlight moment video from the video according to the time points at which the emotion score exceeds a preset emotion score threshold and a preset emotion fluctuation duration;
Specifically, after determining the emotion score at each time point of the music, the server may intercept the highlight moment video from the video according to the time points at which the emotion score exceeds the emotion score threshold and a preset emotion fluctuation duration. The preset emotion score threshold and the preset emotion fluctuation duration may be set by those skilled in the art according to actual needs, and the embodiments of the present invention do not specifically limit them.
In one example, the emotion fluctuation duration includes an emotion start duration and an emotion end duration. Both can be set according to actual needs and may be the same or different. The server determines a continuous period composed of time points whose emotion scores exceed the preset threshold; this period has an initial time point and a termination time point. The server computes an emotion start time point from the initial time point and the emotion start duration, computes an emotion end time point from the termination time point and the emotion end duration, and then intercepts the highlight moment video from the video between the emotion start time point and the emotion end time point. Intercepting in this way captures the full course of the target object's emotional fluctuation, further improving how engaging the creative work is.
For example: the preset emotion score threshold is 4 points, and the preset emotion start and end durations are both 3 seconds. The server determines that child B's emotion scores from the 52nd to the 58th second all exceed the threshold, i.e., the continuous period is 6 seconds long. From the initial time point (the 52nd second) it computes the emotion start time point as 52 - 3 = 49 seconds, and from the termination time point (the 58th second) it computes the emotion end time point as 58 + 3 = 61 seconds. The server can then intercept the content from the 49th to the 61st second of the video as the highlight moment video and store it in a database inside the server.
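Computing the clip boundaries from one continuous above-threshold period can be sketched as below; the function name and the list-of-seconds input format are illustrative assumptions.

```python
def highlight_bounds(high_times, start_lead=3, end_tail=3):
    """Clip boundaries for one continuous run of above-threshold seconds.

    Extends the run backward by the emotion start duration and forward
    by the emotion end duration, clamping the start at 0.
    """
    start = min(high_times) - start_lead
    end = max(high_times) + end_tail
    return max(0, start), end
```

With the 52nd-58th-second run and 3-second lead/tail durations, this yields the (49, 61) window of the example.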
Step 4043, acquiring the creative work according to the highlight moment video and the adjusted music.
Specifically, after intercepting the highlight moment video from the video, the server can acquire the creative work from the highlight moment video and the adjusted music.
In one example, the server may combine several highlight moment videos and a preset spectrum image as the video stream of the creative work, with the adjusted music as the audio stream, into a complete video file to obtain the creative work.
In another example, the server may further sort the highlight moment videos in descending order of emotion score and output them as a work preview, a highlight reel, and the like.
In the second embodiment of the present invention, acquiring the unconscious behavior of the target object and the corresponding time point during music playback includes: recording a video of the target object during music playback, and acquiring the unconscious behavior and corresponding time point from the video. Acquiring the creative work according to the adjusted music then includes acquiring it from both the video and the adjusted music. Combining the video of the target object with the adjusted music makes the resulting creative work more vivid and its content richer, further improving its quality.
A third embodiment of the present invention relates to an authoring method. Implementation details of this embodiment are described below; they are provided only to ease understanding and are not necessary for implementing the solution. Fig. 7 is a schematic diagram of the authoring method of the third embodiment, which includes:
step 501, cutting music into n segments to obtain n segments of music;
In a specific implementation, the embodiment of the present invention supports co-creation by multiple target objects. The server may determine the number n of target objects, where n is an integer greater than 1, and then split the music into n segments to obtain n music pieces, which correspond to the n target objects respectively. Having multiple target objects participate in the creation together further improves the interactivity and fun of the creation.
In one example, there are 3 target objects: child D, child E, and child F. The server evenly splits the music to be re-created into 3 pieces, obtaining a first music piece corresponding to child D, a second music piece corresponding to child E, and a third music piece corresponding to child F.
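The even split into n pieces can be sketched as a list of (start, end) time spans; the function name and the seconds-based representation are assumptions for the example.

```python
def split_spans(duration_s, n):
    """Split [0, duration_s) into n equal (start, end) spans in seconds."""
    step = duration_s / n
    return [(i * step, (i + 1) * step) for i in range(n)]
```

For a 180-second piece split among 3 children, each child's segment covers 60 seconds.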
Step 502, in the process of playing n music segments, acquiring the unconscious behaviors and corresponding time points of a target object corresponding to the currently played music segment to obtain the unconscious behaviors and corresponding time points of n target objects;
specifically, after obtaining n pieces of music pieces, the server may obtain the unconscious behaviors and corresponding time points of the target objects corresponding to the currently played music piece in the process of playing the n pieces of music pieces, and obtain the unconscious behaviors and corresponding time points of the n target objects.
In one example, the co-creation may be remote: the server plays the first music piece for child D to acquire child D's unconscious behavior and corresponding time point; plays the second music piece for child E to acquire child E's unconscious behavior and corresponding time point; and plays the third music piece for child F to acquire child F's unconscious behavior and corresponding time point.
In another example, the coauthoring may be local coauthoring, and the server acquires the unconscious behavior of the child D and the corresponding time point while playing the first music piece; when the second music piece is played, acquiring the unconscious behaviors of the child E and the corresponding time point; when the third music piece is played, the unconscious behavior of the child F and the corresponding time point are obtained.
Step 503, respectively adjusting the characteristics of n music segments according to the unconscious behaviors of the n target objects and the corresponding time points;
specifically, after the server obtains the unconscious behaviors of the n target objects and the corresponding time points, the server can respectively adjust the characteristics of the n pieces of music according to the unconscious behaviors of the n target objects and the corresponding time points.
In one example, the server may adjust the characteristics of the first piece of music based on the involuntary behavior of child D and the corresponding point in time; adjusting the characteristics of the second music piece according to the unconscious behaviors of the child E and the corresponding time point; the characteristics of the third piece of music are adjusted according to the involuntary behaviour of the child F and the corresponding point in time.
Step 504, acquiring the creative work according to the adjusted n music pieces.
Specifically, after respectively adjusting the characteristics of the n music pieces according to the unconscious behaviors of the n target objects and the corresponding time points, the server can acquire the creative work from the adjusted music.
In a specific implementation, after adjusting the characteristics of the n music pieces, the server may join them into a complete piece of music, apply operations such as noise filtering, reverberation, and sound quality enhancement to obtain an audio file, output it as the creative work, and store it in a database inside the server or upload it to a preset cloud database.
In one example, after the server obtains the creative work, the creative work can be played immediately for subsequent repair, improvement, and the like.
In a third embodiment of the present invention, the number of the target objects is n, and n is an integer greater than 1; in the music playing process, acquiring the unconscious behavior and the corresponding time point of the target object comprises the following steps: cutting the music into n sections to obtain n sections of music fragments; wherein, n music pieces correspond to n target objects respectively; in the process of playing the n music segments, acquiring the unconscious behaviors and corresponding time points of the target object corresponding to the currently played music segment to obtain the unconscious behaviors and corresponding time points of the n target objects; said adjusting characteristics of said music according to said unconscious behavior and said corresponding point in time, comprising: and respectively adjusting the characteristics of the n music pieces according to the unconscious behaviors of the n target objects and the corresponding time points. A plurality of target objects participate in the creation process together, so that the interactivity and the interestingness of creation can be further improved.
A fourth embodiment of the present invention relates to an authoring method. The implementation details of the authoring method of the present embodiment are specifically described below, the following are provided only for the convenience of understanding, and are not necessary for implementing the present embodiment, and fig. 8 is a schematic diagram of the authoring method according to the fourth embodiment of the present invention, and includes:
step 601, acquiring n ranges corresponding to music;
specifically, the server may obtain n ranges corresponding to music.
In a specific implementation, the embodiment of the present invention supports co-creation by multiple target objects, and the server may first determine the number n of target objects, where n is an integer greater than 1. If the music itself has n vocal ranges, the server can split it directly into those n ranges. If it does not, i.e., the music has fewer than n ranges, the server may expand it to n ranges. The music of the n ranges corresponds to the n target objects respectively, and the n target objects each help compose the music of one range. Acquiring the n ranges of the music and having multiple target objects create together further improves the interactivity and fun of the creation.
In one example, the number of the target objects is 3, the target objects are respectively a child D, a child E and a child F, the music piece itself has 3 ranges, and the server directly splits the music piece to be re-composed into 3 ranges to obtain music of a first range corresponding to the child D, music of a second range corresponding to the child E and music of a third range corresponding to the child F.
In one example, there are 3 target objects: child D, child E, and child F. The music itself has fewer than 3 ranges, so the server expands the music to be re-created to 3 ranges, obtaining music of a first range corresponding to child D, music of a second range corresponding to child E, and music of a third range corresponding to child F.
Step 602, acquiring the unconscious behaviors of n target objects and corresponding time points in the music playing process;
specifically, the server may obtain the unconscious behaviors and the corresponding time points of the n target objects during the music playing process.
In one example, the co-creation may be a remote co-creation, and the server may play music for the child D to obtain an unconscious behavior and a corresponding time point of the child D; playing the music for the child E to acquire the unconscious behaviors and corresponding time points of the child E; the music is also played for the child F, and the unconscious behavior and the corresponding time point of the child F are obtained.
In another example, the coauthoring may be a local coauthoring, and the server may obtain the unconscious behavior and corresponding time point of child D, the unconscious behavior and corresponding time point of child E, and the unconscious behavior and corresponding time point of child F, respectively, while playing music.
Step 603, adjusting the characteristics of the music of n ranges according to the unconscious behaviors of the n target objects and the corresponding time points;
specifically, after acquiring the unconscious behaviors of the n target objects and the corresponding time points, the server may adjust the characteristics of the n musical ranges according to the unconscious behaviors of the n target objects and the corresponding time points.
In one example, the unconscious behaviors of the n target objects include their sound-making behaviors, and the corresponding time points include the time points of those behaviors. The server can take the original music as the first audio track and establish n new empty audio tracks whose time axes are aligned with that of the first track, so the time points of the n target objects' sound-making behaviors map directly onto the n new empty tracks. The server adds each target object's sounds to its empty track at the corresponding time points, obtaining one audio track per target object, i.e., the tracks of the n-range music. The characteristics of the n-range music are then adjusted based on the first track and these n tracks.
For example: there are 3 target objects, child D, child E, and child F, and the server expands the music to be re-created to 3 ranges. The server extracts the first audio track corresponding to the music and establishes three empty audio tracks whose time axes are aligned with that of the first track. It adds the sounds made by child D to the first empty track at the time points of child D's sound-making behavior, obtaining the track corresponding to child D; adds the sounds made by child E to the second empty track at the time points of child E's sound-making behavior, obtaining the track corresponding to child E; and adds the sounds made by child F to the third empty track at the time points of child F's sound-making behavior, obtaining the track corresponding to child F. The server then combines the first track with the tracks corresponding to children D, E, and F, thereby adjusting the characteristics of the music of the 3 ranges.
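Combining the first track with the n per-child tracks generalizes the two-track case to any number of aligned tracks. As before, representing a track as a list of float samples on a shared time axis is an assumption made only for this sketch.

```python
def merge_tracks(tracks):
    """Mix any number of timeline-aligned tracks into one.

    Shorter tracks simply contribute nothing past their end, which
    matches empty tracks that are only partially filled with sounds.
    """
    length = max(len(t) for t in tracks)
    return [sum(t[i] for t in tracks if i < len(t)) for i in range(length)]
```

In the 3-child example this would be called with four tracks: the first track plus the tracks for children D, E, and F.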
And step 604, acquiring the creative work according to the adjusted music.
Specifically, after adjusting the characteristics of the music of n ranges according to the unconscious behaviors of the n target objects and the corresponding time points, the server may acquire the creative work according to the adjusted music.
In one example, the server combines the first audio track, the audio track corresponding to the child D, the audio track corresponding to the child E, and the audio track corresponding to the child F to obtain a complete music, and performs operations such as noise filtering, reverberation adding, and sound quality improvement on the complete music to obtain an audio file, and outputs the creative work, and the creative work is stored in a database inside the server or uploaded to a preset cloud database.
In one example, after the server obtains the creative work, the creative work can be played back immediately for subsequent revision, improvement, and the like.
In the fourth embodiment of the present invention, the number of target objects is n, where n is an integer greater than 1. Acquiring the unconscious behavior of the target object and the corresponding time point during music playback includes: acquiring n ranges corresponding to the music, where the n ranges correspond to the n target objects respectively; and acquiring the unconscious behaviors and corresponding time points of the n target objects during music playback. Adjusting the characteristics of the music according to the unconscious behavior and the time point includes: adjusting the characteristics of the music of the n ranges according to the unconscious behaviors of the n target objects and the corresponding time points. Having multiple target objects create together according to the n ranges of the music further improves the interactivity and fun of the creation.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into several, as long as the same logical relationship is preserved, and all such variants fall within the protection scope of this patent. Adding insignificant modifications to an algorithm or process, or introducing insignificant design changes that do not alter its core design, also falls within the protection scope of this patent.
A fifth embodiment of the present invention relates to an electronic apparatus, as shown in fig. 9, including: at least one processor 701; and a memory 702 communicatively coupled to the at least one processor 701; the memory 702 stores instructions executable by the at least one processor 701, and the instructions are executed by the at least one processor 701, so that the at least one processor 701 can execute the authoring method in the above embodiments.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium via an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
A sixth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing the relevant hardware. The program is stored in a storage medium and includes several instructions that cause a device (which may be a microcontroller, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.

Claims (11)

1. An authoring method, comprising:
in the music playing process, acquiring the unconscious behavior of a target object and a corresponding time point;
adjusting the characteristics of the music according to the unconscious behaviors and the time points;
and acquiring the creative work according to the adjusted music.
2. The authoring method according to claim 1, wherein the unconscious behavior comprises a sound-making behavior, and the time point comprises a time point corresponding to the sound-making behavior;
adjusting characteristics of the music according to the unconscious behavior and the time point, including:
extracting a first music track corresponding to the music;
establishing an empty audio track; wherein the time axis of the empty track is the same as the time axis of the first track;
adding the sound to a target position in the empty audio track to obtain a second audio track; wherein the target position is the time point corresponding to the sound-making behavior;
adjusting a characteristic of the music based on the first audio track and the second audio track.
3. The authoring method according to claim 2, wherein the sound-making behavior comprises a beat-tapping behavior;
the adding the sound to a target position in the empty audio track to obtain a second audio track comprises:
determining the strength corresponding to the beat-tapping behavior;
adjusting the volume of a preset sound effect audio according to the strength of the beat to obtain a sound effect audio corresponding to the beat-tapping behavior;
adding the sound effect audio corresponding to the beat-tapping behavior at a target position in the empty audio track to obtain the second audio track; wherein the target position is the time point corresponding to the beat-tapping behavior.
4. The authoring method according to claim 1, wherein the acquiring the unconscious behavior of the target object and the corresponding time point while the music is playing comprises:
recording a video of the target object while the music is playing; and
acquiring the unconscious behavior of the target object and the corresponding time point from the video;
the obtaining the creative work according to the adjusted music comprises:
obtaining the creative work according to the video and the adjusted music.
5. The authoring method according to claim 4, wherein the characteristic of the music comprises volume, the unconscious behavior comprises an action, and the corresponding time point comprises a time point corresponding to the action;
the adjusting the characteristic of the music according to the unconscious behavior and the time point comprises:
determining an emotion score corresponding to the action; and
adjusting the volume of the music according to the emotion score corresponding to the action and the time point corresponding to the action.
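The score-to-volume step above can be sketched as follows, assuming an emotion score in [0, 1] per detected action; the linear gain mapping, the clamping range, and the function names are illustrative assumptions rather than the claimed method.

```python
def volume_gain(emotion_score: float, base_gain: float = 1.0) -> float:
    """Map an action's emotion score to a playback gain: stronger emotion,
    louder music, clamped to a sane range."""
    gain = base_gain * (0.5 + emotion_score)  # score 0 -> half volume, 1 -> 1.5x
    return max(0.0, min(gain, 2.0))

def adjust_volume(music_gains: dict, action_scores: dict) -> dict:
    """Apply a gain at every time point where an action was observed.
    music_gains: {time_point_s: gain}; action_scores: {time_point_s: score}."""
    for t, score in action_scores.items():
        music_gains[t] = volume_gain(score)
    return music_gains
```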
6. The authoring method according to claim 5, wherein the obtaining the creative work according to the video and the adjusted music comprises:
determining an emotion score for each time point of the music according to the emotion scores corresponding to the actions and the time points corresponding to the actions;
extracting a highlight video clip from the video according to the time points at which the emotion score exceeds a preset emotion score threshold and a preset emotion fluctuation duration; and
obtaining the creative work according to the highlight video clip and the adjusted music.
7. The authoring method according to claim 6, wherein the emotion fluctuation duration comprises an emotion start duration and an emotion end duration;
the extracting a highlight video clip from the video according to the time points at which the emotion score exceeds the preset emotion score threshold and the preset emotion fluctuation duration comprises:
determining a continuous period consisting of the time points at which the emotion score exceeds the preset emotion score threshold, wherein the continuous period has an initial time point and a termination time point;
determining an emotion start time point according to the initial time point and the emotion start duration;
determining an emotion end time point according to the termination time point and the emotion end duration; and
extracting the highlight video clip from the video according to the emotion start time point and the emotion end time point.
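The window computation in the claim above can be sketched as follows. It assumes a single continuous above-threshold period in a time-ordered score timeline; the function name and the interpretation of the start/end durations as padding around that period are assumptions.

```python
def highlight_window(score_timeline, threshold, start_pad_s, end_pad_s):
    """Find the continuous period where the emotion score exceeds the
    threshold, then widen it by the start/end durations to get the clip
    boundaries. score_timeline: list of (time_point_s, score), time-ordered."""
    above = [t for t, s in score_timeline if s > threshold]
    if not above:
        return None  # no emotional peak -> no highlight clip
    initial, final = above[0], above[-1]
    clip_start = max(0.0, initial - start_pad_s)  # emotion start time point
    clip_end = final + end_pad_s                  # emotion end time point
    return clip_start, clip_end
```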
8. The authoring method according to claim 1, wherein the number of target objects is n, n being an integer greater than 1;
the acquiring the unconscious behavior of the target object and the corresponding time point while the music is playing comprises:
cutting the music into n segments, wherein the n segments correspond to the n target objects respectively; and
while the n segments are playing, acquiring the unconscious behavior and corresponding time point of the target object corresponding to the currently playing segment, so as to obtain the unconscious behaviors and corresponding time points of the n target objects;
the adjusting the characteristic of the music according to the unconscious behavior and the time point comprises:
adjusting the characteristics of the n segments respectively according to the unconscious behaviors of the n target objects and the corresponding time points.
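The segmentation step of the claim above can be sketched as follows, assuming equal-length segments (the claim does not fix how the cut points are chosen; the equal split and the function name are assumptions):

```python
def split_music(duration_s: float, n: int) -> list[tuple[float, float]]:
    """Cut the music into n equal segments, one per target object;
    returns (start, end) boundaries in seconds."""
    seg = duration_s / n
    return [(i * seg, (i + 1) * seg) for i in range(n)]
```

During playback, each segment then collects behaviors only from its own target object, and each segment's characteristics are adjusted independently.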
9. The authoring method according to claim 1, wherein the number of target objects is n, n being an integer greater than 1;
the acquiring the unconscious behavior of the target object and the corresponding time point while the music is playing comprises:
acquiring n ranges corresponding to the music, wherein the n ranges correspond to the n target objects respectively; and
while the music is playing, acquiring the unconscious behaviors and corresponding time points of the n target objects;
the adjusting the characteristic of the music according to the unconscious behavior and the time point comprises:
adjusting the characteristics of the music in the n ranges according to the unconscious behaviors of the n target objects and the corresponding time points.
10. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor, wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the authoring method according to any one of claims 1 to 9.
11. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the authoring method of any one of claims 1 to 9.
CN202110093846.4A 2021-01-22 2021-01-22 Authoring method, electronic device, and computer-readable storage medium Active CN112927665B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110093846.4A CN112927665B (en) 2021-01-22 2021-01-22 Authoring method, electronic device, and computer-readable storage medium


Publications (2)

Publication Number Publication Date
CN112927665A true CN112927665A (en) 2021-06-08
CN112927665B CN112927665B (en) 2022-08-30

Family ID: 76165736

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024093798A1 (en) * 2022-10-31 2024-05-10 北京字跳网络技术有限公司 Music composition method and apparatus, and electronic device and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008117406A1 (en) * 2007-03-26 2008-10-02 Pioneer Corporation Awaking device and awaking method
CN202816087U (en) * 2012-10-24 2013-03-20 胡茂芳 Infant nursing instrument
CN108919953A (en) * 2018-06-29 2018-11-30 咪咕文化科技有限公司 A kind of music method of adjustment, device and storage medium
CN110853606A (en) * 2019-11-26 2020-02-28 Oppo广东移动通信有限公司 Sound effect configuration method and device and computer readable storage medium


Non-Patent Citations (2)

Title
RYAN HOURIGAN et al.: "Teaching music to children with autism: Understandings and perspectives", Music Educators Journal *
ZHANG Qi et al.: "Research on the design of wearable interactive toys based on unconscious behavior", 《创意设计源》 *



Similar Documents

Publication Publication Date Title
US11032616B2 (en) Selectively incorporating feedback from a remote audience
Geoghegan et al. Podcast solutions: The complete guide to audio and video podcasting
Kjus Live and recorded: Music experience in the digital millennium
Neumark Doing things with voices: Performativity and voice
EP3142383B1 (en) Terminal sound mixing system and playing method
CN106792013A (en) A kind of method, the TV interactive for television broadcast sounds
CN104618446A (en) Multimedia pushing implementing method and device
WO2019114015A1 (en) Robot performance control method and robot
Johnson et al. Machinima: the art and practice of virtual filmmaking
CN112927665B (en) Authoring method, electronic device, and computer-readable storage medium
CN113439447A (en) Room acoustic simulation using deep learning image analysis
Arantxa et al. Online prosumer convergence: Listening, creating and sharing music on YouTube and TikTok.
WO2023071166A1 (en) Data processing method and apparatus, and storage medium and electronic apparatus
JP6196839B2 (en) A communication karaoke system characterized by voice switching processing during communication duets
WO2021246104A1 (en) Control method and control system
CN106231480B (en) A kind of method and system for realizing sound equipment output based on Spotify
CN111445742B (en) Vocal music teaching system based on distance education system
US20230042477A1 (en) Reproduction control method, control system, and program
WO2024053094A1 (en) Media information emphasis playback device, media information emphasis playback method, and media information emphasis playback program
Wingstedt The aesthetic potential of vocal sound in online learning situations
US20240015368A1 (en) Distribution system, distribution method, and non-transitory computer-readable recording medium
Sudarsono Soundscape composition and relationship between sound objects and soundscape dimensions of an urban area
WO2024056078A1 (en) Video generation method and apparatus and computer-readable storage medium
Green The Podcaster's Audio Handbook
JP2023109715A (en) Audio control system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant