US20230079835A1 - Information processing device - Google Patents
- Publication number
- US20230079835A1 (application No. US 17/943,283)
- Authority
- US
- United States
- Prior art keywords
- production
- parts
- length
- control section
- performer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N5/2228—Video assist systems used in motion picture production, e.g. video cameras connected to viewfinders of motion picture cameras or related video signal processing
- G06T7/60—Analysis of geometric attributes
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
- G06T2207/10012—Stereo images
- G06T2207/10021—Stereoscopic video; Stereoscopic image sequence
- G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/30196—Human being; Person
Definitions
- the present disclosure relates to a technology for performing production according to movement of a target photographed by a camera.
- a library for estimating the posture of a human has been disclosed as an open source.
- This library detects characteristic points such as joint positions of a human from a two-dimensional image by using a neural network in which deep learning is performed, and estimates the posture of the human by connecting the characteristic points to each other.
- Japanese Patent Laid-Open No. 2020-204890 discloses a robot system that estimates the posture of a human photographed by using such a posture estimating model and synchronizes the posture of a robot device with the estimated posture of the user.
- In a live venue or a concert hall, production is performed which changes lighting and changes the volume of sound according to movement of a performer. Such production has been performed manually.
- a person in charge of production determines start timing and end timing of production, and manually controls lighting apparatuses and sound apparatuses.
- the person in charge of production observes movement of the performer, and performs work of synchronizing production with the movement of the performer.
- However, the burden of this work is heavy.
- an information processing device including an obtaining section configured to obtain position information of a plurality of parts of a photographing target, a deriving section configured to derive a length between two parts, and a production control section configured to perform production on the basis of the derived length.
- an information processing device including an obtaining section configured to obtain position information of a plurality of parts of a photographing target, a deriving section configured to derive a direction connecting two parts to each other, and a production control section configured to perform production on the basis of the derived direction.
- FIG. 1 is a diagram illustrating an example of a production system that performs production according to movement of a performer
- FIG. 2 is a diagram illustrating a configuration of an information processing device according to an embodiment
- FIG. 3 is a diagram illustrating an example of a result of estimating the positions of a plurality of parts of the performer
- FIG. 4 is a diagram illustrating a concrete example of the plurality of parts of the performer
- FIG. 5 is a diagram illustrating an example of a derived length between two parts
- FIG. 6 is a diagram illustrating another example of the derived length between the two parts
- FIG. 7 is a diagram illustrating an example of a derived direction connecting two parts to each other
- FIG. 8 is a diagram illustrating an example of video generated by a production control section.
- FIG. 9 is a diagram illustrating an example of video in which the production control section performs video production.
- FIG. 1 illustrates an example of a production system 1 that performs production according to the movement of a performer.
- a stage 5 is provided with a plurality of lighting apparatuses 2 for performing light production and a plurality of sound apparatuses 3 for performing sound production.
- the production system 1 may be used in a venue in which the performer performs in front of an audience, the venue being a concert hall, an outdoor stage, or the like. While FIG. 1 illustrates a state in which production apparatuses including the lighting apparatuses 2 and the sound apparatuses 3 are provided to the stage 5 , these production apparatuses may be provided in the vicinity of the stage 5 . It suffices for the production apparatuses to be arranged at such positions as to be able to provide light production and/or sound production to the performer and the audience in the vicinity of the stage 5 .
- a camera 4 photographs the performer during performance on the stage 5 in predetermined cycles.
- the camera 4 is a three-dimensional camera capable of obtaining depth information.
- the camera 4 may be a stereo camera or a time of flight (ToF) camera.
- the camera 4 photographs a three-dimensional space in which the performer is present in predetermined cycles (for example, 30 frames/sec).
- the lighting apparatuses 2 each include a movable unit that can change an irradiation direction of light.
- the color and light amount of the irradiation light are dynamically changed by an information processing device (see FIG. 2 ) to be described later.
- the sound apparatuses 3 each include a movable unit that can change an output direction of sound. A volume and an effect of the output sound are dynamically changed by the information processing device.
- the information processing device is provided with an image of the photographed performer from the camera 4 , and controls light production by the lighting apparatuses 2 and/or sound production by the sound apparatuses 3 on the basis of the image. The information processing device thereby enlivens the live performance.
- FIG. 2 illustrates a configuration of an information processing device 10 according to an embodiment.
- the information processing device 10 includes an estimating section 20 that estimates the posture and/or position of the performer and a control section 30 that controls production.
- the control section 30 includes an obtaining section 32 , a deriving section 34 , and a production control section 36 . During the performance by the performer, the control section 30 outputs music from the sound apparatuses 3 .
- the elements described as functional blocks performing various processes in FIG. 2 can each include a circuit block, a memory, or another large-scale integrated (LSI) circuit or central processing unit (CPU) in terms of hardware, and are implemented by a program loaded in a memory or the like in terms of software.
- these functional blocks can be implemented in various forms by only hardware, only software, or combinations of hardware and software, and are not limited to one of the forms.
- the estimating section 20 and the control section 30 may be implemented by the same processor and may be implemented by separate processors.
- the estimating section 20 receives the image of the performer as a photographing target photographed by the camera 4 , and estimates the positions of a plurality of parts of a body of the performer. Various methods for recognizing the positions of parts of a human body have been proposed. The estimating section 20 may estimate the position of each part of the performer by using an existing posture estimating technology.
- FIG. 3 illustrates an example of a result of estimating the positions of the plurality of parts of the performer.
- the estimating section 20 estimates the positions of the plurality of parts of the performer in the three-dimensional space, and provides the positions of the plurality of parts of the performer to the control section 30 .
- the estimating section 20 can estimate the posture and position of the performer in the three-dimensional space by estimating the positions of the plurality of parts of the performer in the three-dimensional space and coupling two adjacent parts to each other by a straight line (bone).
- the estimating section 20 performs posture estimation processing in real time, that is, performs the posture estimation processing in the same cycles as photographing cycles of the camera 4 , and provides an estimation result to the control section 30 .
- the obtaining section 32 obtains position information of the plurality of parts of the body of the performer as a photographing target in predetermined cycles.
- the performer may be a dancer dancing to music, and the posture and position of the performer on the stage 5 change with the passage of time.
- the production control section 36 performs light production and/or sound production by controlling the lighting apparatuses 2 and/or the sound apparatuses 3 according to movement of the performer.
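The per-frame flow described above (estimate part positions, derive a length between two parts, drive production from it) can be sketched as follows. The function names and the sample joint coordinates are illustrative assumptions; the patent does not specify an API.

```python
import math

def estimate_parts(frame):
    """Stand-in for the estimating section: a real system would run a
    pose-estimation model here; we assume the frame already carries
    part -> (x, y, z) positions."""
    return frame

def derive_length(parts, a, b):
    """Deriving section: Euclidean length between two named parts."""
    return math.dist(parts[a], parts[b])

def run_cycle(frame, production_control):
    parts = estimate_parts(frame)      # what the obtaining section receives
    length = derive_length(parts, "right_hand", "left_hand")
    production_control(length)         # production control section reacts

# One photographing cycle with hypothetical hand positions (metres):
volumes = []
run_cycle({"right_hand": (1.0, 1.0, 0.0), "left_hand": (-1.0, 1.0, 0.0)},
          volumes.append)
print(volumes[0])  # 2.0
```

In a real deployment `run_cycle` would be invoked at the camera's photographing rate (for example, 30 times per second), with `production_control` wired to the lighting and sound apparatuses.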
- FIG. 4 illustrates a concrete example of the plurality of parts of the performer.
- the obtaining section 32 obtains position information of 19 parts of the body of the performer. Specifically, the obtaining section 32 obtains position information of a nose 50 a , a neck 50 b , a right shoulder 50 c , a right elbow 50 d , a right wrist 50 e , a right hand 50 f , a left shoulder 50 g , a left elbow 50 h , a left wrist 50 i , a left hand 50 j , a central waist 50 k , a right waist 50 l , a right knee 50 m , a right ankle 50 n , a right toe 50 o , a left waist 50 p , a left knee 50 q , a left ankle 50 r , and a left toe 50 s.
- In a human body model adopted in the embodiment, 19 parts are defined, and for each part, parts adjacent to the part are defined.
- For example, for the neck 50 b , the nose 50 a , the right shoulder 50 c , the left shoulder 50 g , and the central waist 50 k are defined as adjacent parts, and two adjacent parts are coupled to each other by a bone.
- For example, for the right elbow 50 d , the right shoulder 50 c and the right wrist 50 e are defined as adjacent parts.
- the estimating section 20 estimates the position information of the plurality of parts of the performer on the basis of the human body model.
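The 19-part body model and its adjacency (bone) relations can be written down directly from the text. Only the two adjacency entries stated above (for the neck and the right elbow) are filled in; the remaining entries are omitted rather than guessed.

```python
# The 19 parts of the human body model adopted in the embodiment.
PARTS = [
    "nose", "neck", "right_shoulder", "right_elbow", "right_wrist",
    "right_hand", "left_shoulder", "left_elbow", "left_wrist", "left_hand",
    "central_waist", "right_waist", "right_knee", "right_ankle", "right_toe",
    "left_waist", "left_knee", "left_ankle", "left_toe",
]

# Adjacency relations stated in the text; each pair is coupled by a bone.
ADJACENT = {
    "neck": ["nose", "right_shoulder", "left_shoulder", "central_waist"],
    "right_elbow": ["right_shoulder", "right_wrist"],
}

assert len(PARTS) == 19
```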
- the deriving section 34 derives a length between two parts.
- the production control section 36 performs production on the basis of the derived length between the two parts.
- FIG. 5 illustrates an example of the derived length between the two parts.
- the deriving section 34 derives a length L between the right hand 50 f and the left hand 50 j .
- the production control section 36 performs production on the basis of the derived length L.
- the length L may be derived from a three-dimensional coordinate value of the right hand 50 f and a three-dimensional coordinate value of the left hand 50 j . It is to be noted that the right hand 50 f and the left hand 50 j are an example and that the deriving section 34 may derive the length L between two other parts.
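Deriving the length L from the two three-dimensional coordinate values is a plain Euclidean distance. The coordinates below are illustrative, not taken from the patent.

```python
import math

right_hand = (0.6, 1.2, 0.1)   # hypothetical (x, y, z) in metres
left_hand = (-0.6, 1.2, 0.1)

# Length L between the right hand 50f and the left hand 50j (FIG. 5).
L = math.dist(right_hand, left_hand)
print(L)  # 1.2
```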
- FIG. 6 illustrates another example of the derived length between the two parts.
- the performer is moving at all times.
- the length L between the right hand 50 f and the left hand 50 j therefore changes incessantly.
- the production control section 36 may adjust the volume of music output by the sound apparatuses 3 according to the derived length L. For example, the production control section 36 may increase the volume of the music when the length L becomes long, and the production control section 36 may decrease the volume of the music when the length L becomes short. Incidentally, the production control section 36 may decrease the volume of the music when the length L becomes long, and the production control section 36 may increase the volume of the music when the length L becomes short.
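A hedged sketch of the length-to-volume mapping described above. The clamp range (0.2 m to 1.8 m) and the volume scale (0 to 100) are assumptions; the patent gives no concrete values, and the inverted mapping it also mentions would simply swap `v_min` and `v_max`.

```python
def volume_from_length(length_m, lo=0.2, hi=1.8, v_min=0.0, v_max=100.0):
    """Linearly map the derived length L to a music volume, clamped
    so that lengths outside [lo, hi] saturate at v_min / v_max."""
    t = (length_m - lo) / (hi - lo)
    t = min(1.0, max(0.0, t))
    return v_min + t * (v_max - v_min)

print(volume_from_length(1.0))   # 50.0
print(volume_from_length(2.5))   # 100.0 (clamped)
```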
- the production control section 36 may apply a sound effect corresponding to the length L to the music in addition to the volume, and may change the parameter of the sound effect according to the length L. For example, the production control section 36 may amplify and emphasize a high frequency range when the length L becomes long, and the production control section 36 may amplify and emphasize a low frequency range when the length L becomes short.
- the production control section 36 may adjust amounts of light of the lighting apparatuses 2 according to the derived length L. For example, the production control section 36 may increase the light amounts when the length L becomes long, and the production control section 36 may decrease the light amounts when the length L becomes short. At this time, the production control section 36 may adjust the light amounts of the lighting apparatuses 2 according to the derived length L while controlling the movable units of the plurality of lighting apparatuses 2 so as to irradiate the right hand 50 f or the left hand 50 j with light. Incidentally, the production control section 36 may decrease the light amounts when the length L becomes long, and the production control section 36 may increase the light amounts when the length L becomes short. The production control section 36 may adjust the color of the irradiation light in addition to the light amounts, and may change the color of the irradiation light according to the length L.
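Light production can reuse the same normalised length to drive both the light amount and the irradiation colour. The ranges and the blue-to-red gradient below are illustrative assumptions, not values from the patent.

```python
def light_from_length(length_m, lo=0.2, hi=1.8):
    """Map the derived length L to a light amount (0.0 dim .. 1.0 full)
    and an (R, G, B) colour blended from blue (short L) to red (long L)."""
    t = min(1.0, max(0.0, (length_m - lo) / (hi - lo)))
    amount = t
    color = (int(255 * t), 0, int(255 * (1 - t)))
    return amount, color

amount, color = light_from_length(1.0)
print(amount, color)  # 0.5 (127, 0, 127)
```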
- it is preferable that the deriving section 34 derive a length between two parts not adjacent to each other in the human body model and that the production control section 36 perform production on the basis of the length between those two parts.
- the right hand 50 f and the left hand 50 j described above are an example of the two parts not adjacent to each other in the human body model.
- the length L between two parts not adjacent to each other can vary greatly as compared with the length between two adjacent parts, and is therefore suitable for use as a dynamic production parameter.
- the deriving section 34 may derive, as a production parameter, a length between one part of a right half body of a human body and one part of a left half body of the human body.
- by recognizing the two parts set as a production parameter, the performer may give a performance that pays attention to the length between the two parts, for example, a performance of moving greatly in a left-right direction.
- the deriving section 34 may derive, as a production parameter, a length between one part of an upper half of the human body and one part of a lower half of the human body.
- by recognizing the two parts set as a production parameter, the performer may give a performance that pays attention to the length between the two parts, for example, a performance of moving greatly in an upward-downward direction.
- without being limited to one predetermined set (two specific parts), the deriving section 34 may derive the lengths of a predetermined plurality of sets as production parameters, and the production control section 36 may perform production on the basis of the length of each set.
- the production control section 36 may perform sound production that controls the sound apparatuses 3 on the basis of a length between parts of a first set, and may perform light production that controls the lighting apparatuses 2 on the basis of a length between parts of a second set different from the first set.
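Deriving several predetermined sets at once, with each set driving a different production channel (sound versus light), might look like the sketch below. The part pairs and the channel wiring are assumptions chosen for illustration.

```python
import math

# Each predetermined set of two parts drives one production channel.
PARAMETER_SETS = {
    "sound": ("right_hand", "left_hand"),   # first set -> sound apparatuses
    "light": ("right_toe", "left_hand"),    # second set -> lighting apparatuses
}

def derive_all(parts):
    """Derive the length for every predetermined set in one cycle."""
    return {channel: math.dist(parts[a], parts[b])
            for channel, (a, b) in PARAMETER_SETS.items()}

parts = {"right_hand": (0.5, 1.0, 0.0), "left_hand": (-0.5, 1.0, 0.0),
         "right_toe": (0.2, 0.0, 0.0)}
print(derive_all(parts)["sound"])  # 1.0
```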
- the deriving section 34 may derive a direction connecting two parts to each other as another production parameter.
- the production control section 36 performs production on the basis of the derived direction.
- FIG. 7 illustrates an example of the derived direction connecting the two parts to each other.
- the deriving section 34 derives a direction vector D connecting the left elbow 50 h and the left hand 50 j to each other.
- the production control section 36 performs production on the basis of the derived direction vector D.
- the deriving section 34 may determine the direction of the direction vector by setting the part closer to a central part in the human body model as a starting point and setting the more outward part as an end point.
- when the left elbow 50 h and the left hand 50 j are compared with each other, the left elbow 50 h is closer to the central part than the left hand 50 j .
- the deriving section 34 therefore derives the direction vector D having the left elbow 50 h as a starting point and having the left hand 50 j as an end point.
- the left elbow 50 h and the left hand 50 j are taken as an example, and the deriving section 34 may derive a direction connecting two other parts to each other.
- the production control section 36 may adjust the irradiation directions of light of the lighting apparatuses 2 according to the derived direction vector D.
- the production control section 36 may control the movable unit of each lighting apparatus 2 such that the plurality of lighting apparatuses 2 apply light to one point (cross mark on a dotted line in FIG. 7 ) on a half straight line obtained by extending the direction vector D in a direction from the starting point to the end point.
- the performer can thereby brightly illuminate one point in the direction from the left elbow to the left hand.
- a distance from the end point of the direction vector D to the one point on the half straight line may be a predetermined distance, or may be set to a predetermined multiple of the length of the direction vector D.
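The irradiation target on the half straight line extending D (the cross mark in FIG. 7) can be computed from the two part positions. Here the distance past the end point is a multiple `k` of |D|; `k = 2.0` and the joint coordinates are illustrative assumptions.

```python
def target_point(start, end, k=2.0):
    """Point on the half-line from start through end, k*|D| beyond the
    end point, where D = end - start (left elbow -> left hand)."""
    d = tuple(e - s for s, e in zip(start, end))        # direction vector D
    return tuple(e + k * di for e, di in zip(end, d))

left_elbow = (0.25, 1.0, 0.0)   # hypothetical starting point (inward part)
left_hand = (0.5, 1.5, 0.0)     # hypothetical end point (outward part)

print(target_point(left_elbow, left_hand))  # (1.0, 2.5, 0.0)
```

Each lighting apparatus's movable unit would then be steered so that its beam intersects this point.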
- the deriving section 34 may derive a direction connecting two parts in a left arm or a right arm to each other and use the direction as a production parameter.
- a direction connecting two parts of other than the arm to each other may be derived.
- the production control section 36 can perform production automatically on the basis of a production parameter derived by the deriving section 34 .
- the present embodiment is illustrative, and it is to be understood by those skilled in the art that combinations of constituent elements and processing processes of the embodiment are susceptible of various modifications and that such modifications also fall within the scope of the present technology.
- the photographing target of the camera 4 is a live performer.
- the photographing target may be a person other than the live performer.
- the photographing target of the camera 4 may be anything as long as a body model is established and the position information of a plurality of parts can be obtained from a photographed image.
- the photographing target may, for example, be a human type or pet type autonomous robot.
- the camera 4 may be a camera that obtains a two-dimensional image not having depth information.
- the deriving section 34 derives the length L between two parts as a production parameter.
- a length between one part of the performer and a production apparatus may be derived and used as a production parameter.
- the production system 1 may be used in a venue in which the performer performs in front of an audience.
- the production system 1 may also be used when the performance of the performer is distributed live.
- the production control section 36 may perform video production on the basis of a production parameter derived by the deriving section 34 .
- FIG. 8 illustrates an example of video generated by the production control section 36 .
- the production control section 36 obtains an image photographed by the camera 4 , and generates video 60 for distribution.
- FIG. 9 illustrates an example of video in which the production control section 36 performs video production.
- the production control section 36 performs production (image processing) as if a bolt of lightning streaked between the right hand and the left hand of the performer.
- the production control section 36 may control the size of the bolt of lightning streaked between the right hand and the left hand of the performer included in the video 60 on the basis of the length L between the right hand 50 f and the left hand 50 j , which is derived by the deriving section 34 .
- the production control section 36 may perform video production on the basis of the direction vector D.
- the production control section 36 controls the movable units of the respective lighting apparatuses 2 such that the plurality of lighting apparatuses 2 apply light to one point on the half straight line obtained by extending the direction vector D.
- the production control section 36 may set a virtual sound source that outputs sound at one point on the half straight line obtained by extending the direction vector D, dispose a light bulb at the position of the virtual sound source, and perform video production such that the virtual sound source moves according to movement of an arm of the performer.
Description
- This application claims the benefit of U.S. Patent Application No. 63/243,780 filed Sep. 14, 2021, the entire contents of which are incorporated herein by reference.
- It is desirable to provide a technology that automatically performs production according to movement of a target such as a performer.
- It is to be noted that optional combinations of the above constituent elements and modes obtained by converting expressions of the present technology between a method, a device, a system, a recording medium, a computer program, and the like are also effective as modes of the present technology.
FIG. 1 illustrates an example of aproduction system 1 that performs production according to the movement of a performer. Astage 5 is provided with a plurality oflighting apparatuses 2 for performing light production and a plurality ofsound apparatuses 3 for performing sound production. Theproduction system 1 may be used in a venue in which the performer performs in front of an audience, the venue being a concert hall, an outdoor stage, or the like. WhileFIG. 1 illustrates a state in which production apparatuses including thelighting apparatuses 2 and thesound apparatuses 3 are provided to thestage 5, these production apparatuses may be provided in the vicinity of thestage 5. It suffices for the production apparatuses to be arranged at such positions as to be able to provide light production and/or sound production to the performer and the audience in the vicinity of thestage 5. - A
camera 4 photographs the performer during performance on thestage 5 in predetermined cycles. Thecamera 4 is a three-dimensional camera capable of obtaining depth information. Thecamera 4 may be a stereo camera or a time of flight (ToF) camera. Thecamera 4 photographs a three-dimensional space in which the performer is present in predetermined cycles (for example, 30 frames/sec). - The
lighting apparatuses 2 each include a movable unit that can change an irradiation direction of light. The color and light amount of the irradiation light are dynamically changed by an information processing device (seeFIG. 2 ) to be described later. Thesound apparatuses 3 each include a movable unit that can change an output direction of sound. A volume and an effect of the output sound are dynamically changed by the information processing device. In theproduction system 1, the information processing device is provided with an image of the photographed performer from thecamera 4, and controls light production by thelighting apparatuses 2 and/or sound production by thesound apparatuses 3 on the basis of the image. The information processing device thereby enlivens the live performance. -
FIG. 2 illustrates a configuration of aninformation processing device 10 according to an embodiment. Theinformation processing device 10 includes an estimatingsection 20 that estimates the posture and/or position of the performer and acontrol section 30 that controls production. Thecontrol section 30 includes an obtainingsection 32, a derivingsection 34, and aproduction control section 36. During the performance by the performer, thecontrol section 30 outputs music from thesound apparatuses 3. - The elements described as functional blocks performing various processes in
FIG. 2 can each include a circuit block, a memory, or another large-scale integrated (LSI) circuit or central processing unit (CPU) in terms of hardware, and are implemented by a program loaded in a memory or the like in terms of software. Hence, it is to be understood by those skilled in the art that these functional blocks can be implemented in various forms by only hardware, only software, or combinations of hardware and software, and are not limited to one of the forms. In theinformation processing device 10, the estimatingsection 20 and thecontrol section 30 may be implemented by the same processor and may be implemented by separate processors. - The estimating
section 20 receives the image of the performer as a photographing target photographed by thecamera 4, and estimates the positions of a plurality of parts of a body of the performer. Various methods for recognizing the positions of parts of a human body have been proposed. The estimatingsection 20 may estimate the position of each part of the performer by using an existing posture estimating technology. -
FIG. 3 illustrates an example of a result of estimating the positions of the plurality of parts of the performer. The estimatingsection 20 estimates the positions of the plurality of parts of the performer in the three-dimensional space, and provides the positions of the plurality of parts of the performer to thecontrol section 30. The estimatingsection 20 can estimate the posture and position of the performer in the three-dimensional space by estimating the positions of the plurality of parts of the performer in the three-dimensional space and coupling two adjacent parts to each other by a straight line (bone). The estimatingsection 20 performs posture estimation processing in real time, that is, performs the posture estimation processing in the same cycles as photographing cycles of thecamera 4, and provides an estimation result to thecontrol section 30. In thecontrol section 30, the obtainingsection 32 obtains position information of the plurality of parts of the body of the performer as a photographing target in predetermined cycles. - In the embodiment, the performer may be a dancer dancing to music, and the posture and position of the performer on the
stage 5 change with the passage of time. Theproduction control section 36 performs light production and/or sound production by controlling thelighting apparatuses 2 and/or thesound apparatuses 3 according to movement of the performer. -
FIG. 4 illustrates a concrete example of the plurality of parts of the performer. In the embodiment, the obtaining section 32 obtains position information of 19 parts of the body of the performer. Specifically, the obtaining section 32 obtains position information of a nose 50a, a neck 50b, a right shoulder 50c, a right elbow 50d, a right wrist 50e, a right hand 50f, a left shoulder 50g, a left elbow 50h, a left wrist 50i, a left hand 50j, a central waist 50k, a right waist 50l, a right knee 50m, a right ankle 50n, a right toe 50o, a left waist 50p, a left knee 50q, a left ankle 50r, and a left toe 50s. - In a human body model adopted in the embodiment, 19 parts are defined, and for each part, parts adjacent to the part are defined. For example, for the
neck 50b, the nose 50a, the right shoulder 50c, the left shoulder 50g, and the central waist 50k are defined as adjacent parts, and each pair of adjacent parts is coupled by a bone. For example, for the right elbow 50d, the right shoulder 50c and the right wrist 50e are defined as adjacent parts. Thus, the human body model defines a plurality of parts and the coupling relations between pairs of parts. The estimating section 20 estimates the position information of the plurality of parts of the performer on the basis of the human body model. - After the obtaining
section 32 obtains the position information of the plurality of parts of the performer, the deriving section 34 derives a length between two parts. The production control section 36 performs production on the basis of the derived length between the two parts. -
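The human body model described above — 19 named parts with per-part adjacency, each adjacent pair coupled by a bone — could be represented as plain data. Only the two adjacency examples given in the text are filled in here, and the dictionary layout is an assumption.

```python
# Partial sketch of the human body model: each part maps to its adjacent
# parts. Only the neck and right-elbow entries given in the text are
# shown; the remaining parts would be defined analogously.
ADJACENCY = {
    "neck": ["nose", "right_shoulder", "left_shoulder", "central_waist"],
    "right_elbow": ["right_shoulder", "right_wrist"],
}


def bones(adjacency):
    """Return each bone (unordered pair of adjacent parts) exactly once."""
    pairs = set()
    for part, neighbors in adjacency.items():
        for other in neighbors:
            pairs.add(tuple(sorted((part, other))))
    return sorted(pairs)
```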
FIG. 5 illustrates an example of the derived length between two parts. The deriving section 34 derives a length L between the right hand 50f and the left hand 50j, and the production control section 36 performs production on the basis of the derived length L. The length L may be derived from the three-dimensional coordinate values of the right hand 50f and the left hand 50j. It is to be noted that the right hand 50f and the left hand 50j are an example and that the deriving section 34 may derive the length L between two other parts. -
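The length L can be computed directly from the two three-dimensional coordinate values, for example as a Euclidean distance. The coordinate values below are illustrative.

```python
import math


def length_between(part_a, part_b):
    """Euclidean length between two parts given as (x, y, z) coordinates."""
    return math.dist(part_a, part_b)  # Python 3.8+

# With the right hand at (0.4, 1.2, 0.1) and the left hand at
# (-0.4, 1.2, 0.1), the hand-to-hand length L is 0.8.
```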
FIG. 6 illustrates another example of the derived length between the two parts. The performer is moving at all times. The length L between the right hand 50f and the left hand 50j therefore changes incessantly. - The
production control section 36 may adjust the volume of music output by the sound apparatuses 3 according to the derived length L. For example, it may increase the volume when the length L becomes long and decrease the volume when the length L becomes short, or, conversely, decrease the volume when the length L becomes long and increase it when the length L becomes short. In addition to the volume, the production control section 36 may apply a sound effect to the music and change the parameters of the sound effect according to the length L. For example, it may amplify and emphasize a high frequency range when the length L becomes long, and amplify and emphasize a low frequency range when the length L becomes short. - The
production control section 36 may adjust the amounts of light of the lighting apparatuses 2 according to the derived length L. For example, it may increase the light amounts when the length L becomes long and decrease them when the length L becomes short, or vice versa. At this time, the production control section 36 may adjust the light amounts of the lighting apparatuses 2 according to the derived length L while controlling the movable units of the plurality of lighting apparatuses 2 so as to irradiate the right hand 50f or the left hand 50j with light. In addition to the light amounts, the production control section 36 may change the color of the irradiation light according to the length L. - In the
production system 1 according to the embodiment, it is preferable that the deriving section 34 derive a length between two parts that are not adjacent to each other in the human body model and that the production control section 36 perform production on the basis of that length. The right hand 50f and the left hand 50j described above are an example of two parts not adjacent to each other in the human body model. The length L between two non-adjacent parts can change far more than the length between two adjacent parts and is therefore suitable for use as a dynamic production parameter. For example, the deriving section 34 may derive, as a production parameter, a length between one part of the right half of the body and one part of the left half. The performer, recognizing the two parts set as a production parameter, may give a performance that pays attention to the length between them, for example, a performance of moving greatly in the left-right direction. - In addition, the deriving
section 34 may derive, as a production parameter, a length between one part of the upper half of the body and one part of the lower half. The performer, recognizing the two parts set as a production parameter, may give a performance that pays attention to the length between them, for example, a performance of moving greatly in the upward-downward direction. - Incidentally, the deriving
section 34 is not limited to the length of one predetermined set (two specific parts); it may derive the lengths of a plurality of predetermined sets as production parameters, and the production control section 36 may perform production on the basis of the length of each set. For example, the production control section 36 may perform sound production that controls the sound apparatuses 3 on the basis of the length between the parts of a first set, and light production that controls the lighting apparatuses 2 on the basis of the length between the parts of a second set different from the first set. - The deriving
section 34 may derive a direction connecting two parts to each other as another production parameter. The production control section 36 performs production on the basis of the derived direction. -
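Returning to the length-based production of FIGS. 5 and 6, the volume and light-amount adjustments might be sketched as clamped linear mappings of L. The linear form, the range constants, and the 0-255 dimmer scale are assumptions; the text only states that the outputs rise (or, inverted, fall) with L.

```python
def _norm(length, l_min=0.2, l_max=1.8):
    """Clamp L into [l_min, l_max] and normalize to [0, 1]."""
    t = (length - l_min) / (l_max - l_min)
    return max(0.0, min(1.0, t))


def volume_from_length(length, invert=False):
    """Map L to a volume in [0, 1]; invert=True decreases volume as L grows."""
    t = _norm(length)
    return 1.0 - t if invert else t


def light_amount_from_length(length, invert=False):
    """Map L to a dimmer value in 0-255 for the lighting apparatuses."""
    return round(255 * volume_from_length(length, invert))
```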
FIG. 7 illustrates an example of the derived direction connecting two parts to each other. The deriving section 34 derives a direction vector D connecting the left elbow 50h and the left hand 50j, and the production control section 36 performs production on the basis of the derived direction vector D. When deriving the direction vector connecting two parts, the deriving section 34 may determine its direction by setting the part that is more inward as viewed from a central part in the human body model as the starting point and the more outward part as the end point. When the left elbow 50h and the left hand 50j are compared, the left elbow 50h is closer to the central part than the left hand 50j; the deriving section 34 therefore derives the direction vector D having the left elbow 50h as a starting point and the left hand 50j as an end point. Incidentally, the left elbow 50h and the left hand 50j are taken as an example, and the deriving section 34 may derive a direction connecting two other parts. - The
production control section 36 may adjust the irradiation directions of the light of the lighting apparatuses 2 according to the derived direction vector D. For example, the production control section 36 may control the movable unit of each lighting apparatus 2 such that the plurality of lighting apparatuses 2 apply light to one point (the cross mark on a dotted line in FIG. 7 ) on the half line obtained by extending the direction vector D in the direction from the starting point to the end point. The performer can thereby brightly illuminate one point in the direction from the left elbow to the left hand. The distance from the end point of the direction vector D to that point on the half line may be a predetermined distance, or may be set to a predetermined multiple of the length of the direction vector. This light production enables the performer to freely manipulate the position irradiated by the lighting apparatuses 2 in the live venue, so that a novel live performance can be realized. - In a case where such light production is performed, it is natural for the performer to specify the position to be irradiated by the
lighting apparatuses 2 by moving an arm. It is therefore preferable for the deriving section 34 to derive a direction connecting two parts of the left arm or the right arm and use that direction as a production parameter. However, a direction connecting two parts other than those of an arm may also be derived. - The present technology has been described above on the basis of the embodiment. According to the embodiment, the
production control section 36 can perform production automatically on the basis of a production parameter derived by the deriving section 34. The present embodiment is illustrative, and it is to be understood by those skilled in the art that combinations of the constituent elements and processing processes of the embodiment are susceptible to various modifications and that such modifications also fall within the scope of the present technology. - In the embodiment, the photographing target of the
camera 4 is a live performer. However, the photographing target may be a person other than a live performer. It is to be noted that the photographing target of the camera 4 may be anything for which a body model is established and from whose photographed image the position information of a plurality of parts can be obtained. The photographing target may, for example, be a human-type or pet-type autonomous robot. - In the embodiment, description has been made of the
camera 4 as a three-dimensional camera capable of obtaining depth information. However, as illustrated in Japanese Patent Laid-Open No. 2020-204890, the camera 4 may be a camera that obtains a two-dimensional image without depth information. - In the embodiment, the deriving section 34 derives the length L between two parts as a production parameter. However, for example, a length between one part of the performer and a production apparatus may instead be derived and used as a production parameter. - In addition, in the embodiment, description has been made of a case where the
production system 1 is used in a venue in which the performer performs in front of an audience. However, in a modification, the production system 1 may be used when the performance of the performer is distributed live. In the modification, the production control section 36 may perform video production on the basis of a production parameter derived by the deriving section 34. -
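In this modification, a video production parameter can be driven the same way as sound and light. As a sketch, the size of an effect drawn into the distributed video might scale with a derived length; the proportional rule and the pixel constants are assumptions.

```python
def effect_size_from_length(length, base_px=40.0, gain_px=160.0,
                            l_min=0.2, l_max=1.8):
    """Map a derived length L to an on-screen effect size in pixels.

    The length is clamped into [l_min, l_max], normalized, and scaled
    linearly between base_px and base_px + gain_px.
    """
    t = max(0.0, min(1.0, (length - l_min) / (l_max - l_min)))
    return base_px + gain_px * t
```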
FIG. 8 illustrates an example of video generated by the production control section 36. In the modification, the production control section 36 obtains an image photographed by the camera 4 and generates video 60 for distribution. -
FIG. 9 illustrates an example of video in which the production control section 36 performs video production. In the present example, the production control section 36 performs production (image processing) as if a bolt of lightning streaked between the right hand and the left hand of the performer. The production control section 36 may control the size of the bolt of lightning included in the video 60 on the basis of the length L between the right hand 50f and the left hand 50j, which is derived by the deriving section 34. - Incidentally, the
production control section 36 may perform video production on the basis of the direction vector D. In the embodiment, the production control section 36 controls the movable units of the respective lighting apparatuses 2 such that the plurality of lighting apparatuses 2 apply light to one point on the half line obtained by extending the direction vector D. In the modification, however, the production control section 36 may set a virtual sound source that outputs sound at one point on that half line, place a light bulb at the position of the virtual sound source, and perform video production such that the virtual sound source moves according to movement of an arm of the performer.
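The direction-vector production used in both the embodiment and this modification might be sketched as follows: distances from a central part decide which part is the starting point, and the target lies on the half line beyond the end point. The "predetermined multiple of the vector length" rule is one of the two options in the text; the function names and the squared-distance comparison are assumptions.

```python
def direction_vector(center, part_a, part_b):
    """Order the two parts so that the more inward one (closer to the
    central part) is the starting point; return (start, vector)."""
    da = sum((a - c) ** 2 for a, c in zip(part_a, center))
    db = sum((b - c) ** 2 for b, c in zip(part_b, center))
    start, end = (part_a, part_b) if da <= db else (part_b, part_a)
    return start, tuple(e - s for s, e in zip(start, end))


def irradiated_point(start, vector, k=1.0):
    """Point on the half line, k times the vector length past the end point."""
    end = tuple(s + v for s, v in zip(start, vector))
    return tuple(e + k * v for e, v in zip(end, vector))
```

For example, with the central waist at the origin, an elbow one unit out, and a hand two units out, the elbow becomes the starting point and the irradiated target lies one further vector length beyond the hand.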
Claims (8)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US17/943,283 (US20230079835A1) | 2021-09-14 | 2022-09-13 | Information processing device |

Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| US202163243780P | 2021-09-14 | | |
| US17/943,283 (US20230079835A1) | 2021-09-14 | 2022-09-13 | Information processing device |
Publications (1)

| Publication Number | Publication Date |
| --- | --- |
| US20230079835A1 | 2023-03-16 |
Family ID: 85480197
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| US17/943,283 (pending, US20230079835A1) | Information processing device | 2021-09-14 | 2022-09-13 |
Country Status (2)

| Country | Link |
| --- | --- |
| US | US20230079835A1 (en) |
| CN | CN115810216A (en) |

- 2022-09-07: CN application CN202211090181.2 filed (published as CN115810216A, pending)
- 2022-09-13: US application US17/943,283 filed (published as US20230079835A1, pending)
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| CN115810216A | 2023-03-17 |
Legal Events

| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | AS | Assignment | Owners: SONY GROUP CORPORATION (Japan), SONY INTERACTIVE ENTERTAINMENT LLC (California), SONY INTERACTIVE ENTERTAINMENT INC. (Japan). Assignment of assignors' interest; assignors: OKUMURA, YASUSHI; KAWAMURA, DAISUKE; BHAT, UDUPI RAMANATH; and others; signing dates from 2022-07-20 to 2022-09-12; reel/frame: 061071/0933 |
| | STPP | Information on status: patent application and granting procedure in general | Docketed new case - ready for examination |