WO2022172259A1 - Program, method, information processing device, and system - Google Patents

Program, method, information processing device, and system

Info

Publication number
WO2022172259A1
Authority
WO
WIPO (PCT)
Prior art keywords
advertisement
image
video
dynamic object
information
Application number
PCT/IB2022/051444
Other languages
English (en)
Japanese (ja)
Inventor
菅谷俊二
Original Assignee
株式会社オプティム
Application filed by 株式会社オプティム
Publication of WO2022172259A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F19/00 Advertising or display means not otherwise provided for
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266 Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel

Definitions

  • the present disclosure relates to programs, methods, information processing devices, and systems.
  • Conventionally, advertisements are physically attached to uniforms worn by players. Advertisements attached to uniforms and the like are fixed and cannot be switched as easily as images can. As a result, sponsors who place advertisements on uniforms cannot change them readily. Moreover, even if other companies want to display advertisements, they cannot do so if the space on the uniform is already filled.
  • Japanese Patent Laid-Open No. 2002-200012 (Patent Document 1) describes a technique for generating a composite video by combining information related to a subject with a background area near the subject, based on positional information of the subject in a free-viewpoint video generated from a three-dimensional model of the subject.
  • Although Patent Document 1 describes synthesizing information related to a subject with a background area near the subject, it does not effectively utilize the area on the subject itself.
  • an object of the present disclosure is to effectively insert an advertisement into a dynamically moving area within a video.
  • According to one embodiment of the present disclosure, a program to be executed by a computer comprising a processor and a memory is provided. The program causes the processor to perform the steps of: analyzing a video frame by frame and acquiring information about a dynamic object included in the video; generating an advertisement image based on the shape, size, and orientation of an area on the dynamic object ascertained from the acquired information; and displaying the generated advertisement image in association with the area.
  • FIG. 1 is a diagram showing the overall configuration of a system 1.
  • FIG. 2 is a diagram showing a functional configuration of a server 20.
  • FIG. 3 is a diagram showing the data structures of the advertisement information database 2021 and the viewer information database 2022 stored by the server 20.
  • FIG. 4 is a diagram showing an overview of the devices and the like that constitute the system 1.
  • FIG. 5 is a flow chart showing a series of processes in which the server 20 generates an advertisement image and synthesizes the generated advertisement image with a video.
  • FIG. 6 is a diagram showing a display example of video managed by the server 20.
  • FIG. 7 is a diagram showing a display example of video distributed from the server 20 and displayed on the terminal device 10.
  • a system 1 for effectively inserting an advertisement into a dynamic object area whose shape, size, and inclination change in a video imaged using an imaging device or the like will be described.
  • For a viewer watching a video of a game such as soccer, the system 1 superimposes an advertisement image on a predetermined area of a dynamic object, such as a player in the game or a tool used in the game, in a manner corresponding to that area.
  • the dynamic object represents a moving object on which an advertisement image is superimposed.
  • the advertisement image represents an image of an advertisement.
  • System 1 acquires an image including a dynamic object from the imaging device.
  • the system 1 performs image analysis on the acquired video for each frame and acquires information on the dynamic object included in the image.
  • the system 1 determines the aspect of the advertisement image to be displayed based on the acquired information.
  • The system 1 superimposes an advertisement image, in the determined manner, on a predetermined area of the dynamic object, such as the chest, abdomen, back, or shoulder of a person, or the roof, side, front wing, or rear wing of a moving vehicle.
  • System 1 determines advertisements based on video and information about the viewer. As a result, the system 1 can display the advertisement image desired by the viewer on the dynamic object included in the video. Therefore, it is possible to display an arbitrary advertisement in an area in which a fixed advertisement was conventionally displayed, so that the area on the dynamic object can be effectively used.
  • FIG. 1 is a diagram showing the overall configuration of the system 1. As shown in FIG. 1, the system 1 includes a terminal device 10, a server 20, an edge server 30, and a photographing device 40.
  • The terminal device 10, the server 20, and the edge server 30 are communicatively connected via the network 80. The edge server 30 is connected to the imaging device 40.
  • The imaging device 40 includes a transmission/reception device based on a communication standard used for short-range communication between information devices, for example, a Bluetooth (registered trademark) module operating in the 2.4 GHz band, and exchanges beacon signals with other information devices equipped with such a module. The edge server 30 acquires the information transmitted from the imaging device 40 based on the beacon signal using this near-field communication. In this way, the photographing device 40 transmits the captured video to the edge server 30 by short-range communication without going through the network 80.
  • the edge server 30 may be communicatively connected to the photographing device 40 via the network 80 .
  • The edge server 30 and the imaging device 40 do not necessarily have to be provided; for example, they need not be included in the system 1 when pre-captured video is stored in the server 20.
  • the terminal device 10 is, for example, a device operated by a user viewing video distributed from the server 20 .
  • the terminal device 10 may be, for example, a mobile terminal such as a smart phone or tablet, a stationary PC (Personal Computer), or a laptop PC.
  • the server 20 manages video distribution.
  • the server 20, for example, superimposes an advertisement image according to the viewer on the dynamic object in the video.
  • the server 20 manages the display mode of the advertisement image, such as the size and orientation, for displaying the advertisement image superimposed on the dynamic object.
  • the server 20 manages various types of information about viewers, such as age, address, preferences, and lifestyle habits.
  • The server 20 shown in FIG. 1 has a communication IF 22, an input/output IF 23, a memory 25, a storage 26, and a processor 29.
  • the communication IF 22 is an interface for inputting and outputting signals for the server 20 to communicate with external devices.
  • the input/output IF 23 functions as an interface with an input device for receiving input operations from the user and an output device for presenting information to the user.
  • the memory 25 temporarily stores programs and data processed by the programs, and is a volatile memory such as a DRAM (Dynamic Random Access Memory).
  • The storage 26 is a storage device for storing data, such as a flash memory or an HDD (Hard Disk Drive).
  • the processor 29 is hardware for executing an instruction set described in a program, and is composed of arithmetic units, registers, peripheral circuits, and the like.
  • A collection of devices consisting of a plurality of servers 20 can also be regarded as one information processing device. That is, the system 1 may be formed as an aggregation of a plurality of servers 20. For example, a different server 20 may take charge of each function of the server 20, and the data obtained by each server 20 may be aggregated and analyzed by yet another server 20.
  • The edge server 30 receives information transmitted from the imaging device 40 and transmits the received information to the server 20. The edge server 30 also transmits information acquired from the server 20 to the imaging device 40.
  • the information acquired from the server 20 includes, for example, information for updating the settings of the imaging device 40 and the like.
  • the imaging device 40 is a device for receiving light with a light receiving element and outputting it to the edge server 30 as image data.
  • The imaging device 40 is assumed to be, for example, one of the following devices: a visible light camera, an infrared camera, an ultraviolet camera, an ultrasonic sensor, an RGB-D camera, or a LiDAR (Light Detection and Ranging) sensor.
  • FIG. 2 is a diagram showing a functional configuration of the server 20. As shown in FIG. 2, the server 20 functions as a communication unit 201, a storage unit 202, and a control unit 203.
  • the communication unit 201 performs processing for the server 20 to communicate with an external device such as the edge server 30.
  • the storage unit 202 stores data and programs used by the server 20.
  • the storage unit 202 stores an advertisement information database 2021, a viewer information database 2022, a video database 2023, and the like.
  • the advertisement information database 2021 stores various information related to advertisements superimposed and displayed on dynamic objects by the system 1 . Details will be described later.
  • the viewer information database 2022 stores various information about viewers of the video. Details will be described later.
  • The video database 2023 stores videos captured by the image capturing device 40 or videos captured in advance by another image capturing device.
  • the video database 2023 may store information about videos to be live distributed, for example, information about distribution content, distribution time, and the like.
  • the control unit 203 exhibits functions shown as various modules by the processor of the server 20 performing processing according to a program.
  • the reception control module 2031 controls the process by which the server 20 receives signals from external devices according to the communication protocol.
  • the transmission control module 2032 controls the processing by which the server 20 transmits signals to external devices according to the communication protocol.
  • the advertisement determination module 2033 determines the advertisement to be displayed based on the information of the viewer who is viewing the video. Specifically, for example, the advertisement determination module 2033 extracts information about the viewer from the viewer information database 2022 based on the log-in information of the viewer viewing the video received by the reception control module 2031 . The advertisement determination module 2033 determines, from the advertisement information database 2021, an advertisement to be displayed in the video that the viewer is viewing, based on the extracted information. The determined advertisement may be plural or singular. When the advertisement display target is set in the advertisement information database 2021, the advertisement determination module 2033 may refer to the set target to determine the advertisement.
  • Advertisement determination module 2033 may determine an advertisement to be displayed in the video based on the extracted information and information associated with the video.
  • The information associated with the video includes, for example: information about sponsor companies related to the video, information related to the content of the video, and information about the products that appear in the video.
  • Algorithms that determine advertisements may use existing algorithms. Also, the advertisement determination module 2033 may determine an advertisement at predetermined timing (a predetermined cycle, a predetermined time, when a predetermined event occurs), for example. At this time, the advertisement determination module 2033 may change the sponsor of the advertisement before and after switching. Also, the advertisement determination module 2033 may change the attributes of advertisements, products, etc. before and after switching.
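The advertisement determination described above can be sketched as follows. The records and the matching rule (match the item "target" against the viewer's hobbies, fall back to "general", then pick randomly among equally suitable advertisements) are illustrative assumptions; only the field names mirror the advertisement information database 2021.

```python
import random

# Hypothetical excerpt of the advertisement information database 2021.
ADS = [
    {"ad_id": "A001", "advertiser": "Company A", "product": "tea",
     "attribute": "luxury goods", "target": "general"},
    {"ad_id": "A002", "advertiser": "Company B", "product": "protein",
     "attribute": "fitness", "target": "sports"},
    {"ad_id": "A003", "advertiser": "Company C", "product": "watch",
     "attribute": "luxury goods", "target": "general"},
]

def determine_advertisement(viewer, ads=ADS, rng=random):
    """Pick an ad whose target matches one of the viewer's hobbies,
    falling back to ads targeted at 'general' viewers."""
    matched = [ad for ad in ads if ad["target"] in viewer.get("hobbies", [])]
    if not matched:
        matched = [ad for ad in ads if ad["target"] == "general"]
    # As in the "luxury goods" example given later, the final choice
    # among equally suitable ads may simply be random.
    return rng.choice(matched)

viewer = {"viewer_id": "U001", "hobbies": ["sports", "movies", "games"]}
ad = determine_advertisement(viewer)
```

Any existing recommendation algorithm could replace this matching rule; only the interface (viewer information in, one advertisement record out) matters to the rest of the system.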
  • the image analysis module 2034 acquires information for creating an advertisement image by the image generation module 2035 based on the dynamic object included in the video.
  • the image analysis module 2034 obtains information about dynamic objects included in the video according to the manner in which the image generation module 2035 generates the advertisement image.
  • The information about the dynamic object includes, for example: the region of the dynamic object included in the video; a predetermined region in the dynamic object (chest, back, shoulder, front wing, body, rear wing, etc.); the movement direction of the dynamic object included in the video; and the shape of the dynamic object included in the video.
  • the image analysis module 2034 acquires, for example, a region of the dynamic object included in the video and a predetermined region in the dynamic object using a trained model.
  • a trained model is created by causing a machine learning model to perform machine learning according to a model learning program based on learning data.
  • The learning data includes, as input, a plurality of images containing dynamic objects, and, as the correct output, predetermined tags, identifiers, etc. assigned to the input images. The predetermined tags include, for example, dynamic objects such as players, balls, and cars, and areas on those objects such as the chest, back, shoulder, front wing, body, and rear wing.
  • the image analysis module 2034 inputs, for example, a video to be delivered to the viewer to the trained model and causes it to output regions about the dynamic object and predetermined regions in the dynamic object.
  • the image analysis module 2034 acquires, for example, the moving direction of the dynamic object included in the video by taking the difference between the frames that make up the video.
  • the frames from which the difference is to be calculated may be continuous or separated by a predetermined number of frames.
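A minimal sketch of this frame-difference analysis, using plain nested lists as grayscale frames; the change threshold and the centroid-based direction estimate are illustrative assumptions, and the two frames compared may be consecutive or a predetermined number of frames apart.

```python
def changed_pixels(prev, curr, threshold=10):
    """Return (y, x) coordinates whose intensity changed by more than
    `threshold` between two grayscale frames (lists of rows)."""
    return [(y, x)
            for y, (row_p, row_c) in enumerate(zip(prev, curr))
            for x, (p, c) in enumerate(zip(row_p, row_c))
            if abs(p - c) > threshold]

def movement_direction(prev, curr, threshold=10):
    """Estimate movement direction as the shift of the centroid of
    bright (object) pixels between the two frames."""
    def centroid(frame):
        pts = [(y, x) for y, row in enumerate(frame)
               for x, v in enumerate(row) if v > threshold]
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    (y0, x0), (y1, x1) = centroid(prev), centroid(curr)
    return (y1 - y0, x1 - x0)  # (dy, dx): positive dx means moving right

# A one-pixel "object" moving one column to the right between frames.
prev = [[0, 255, 0],
        [0, 0, 0]]
curr = [[0, 0, 255],
        [0, 0, 0]]
direction = movement_direction(prev, curr)  # (0.0, 1.0): one pixel right
```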
  • the image analysis module 2034 acquires the shape of the dynamic object included in the video based on the pixel information in the frame.
  • the image analysis module 2034 may obtain the shape of a dynamic object included in the video using a trained model.
  • the correct answer output includes, for example, the shape of the dynamic object, such as a square, a trapezoid, and the like.
  • the correct answer output may be a number designating a preset shape such as shape 1, shape 2, and the like.
  • the image generation module 2035 generates an advertisement image based on the information obtained by the image analysis module 2034 and the advertisement determined by the advertisement determination module 2033.
  • The image generation module 2035 obtains from the image analysis module 2034 the region of the dynamic object included in the video, the predetermined region in the dynamic object, the movement direction of the dynamic object, and the shape of the dynamic object. Based on these, it obtains the size and shape of the area in the dynamic object where the advertisement image is to be synthesized, the inclination of the surface of the dynamic object, and the like. The image generation module 2035 then determines the size and shape of the advertisement image, the inclination of the image in the depth direction, and the like, based on the size and shape of that area and the inclination of the surface of the dynamic object.
  • the image generation module 2035 generates an advertisement image taking into account the size and shape of the image, the inclination of the image in the depth direction, and the like.
  • the image generation module 2035 generates an advertisement image by transforming the original image of the advertisement using, for example, affine transformation.
  • The image generation module 2035 may generate an advertisement image in a mode corresponding to the shape of the dynamic object acquired by the image analysis module 2034. Specifically, for example, transformation rules for the original advertisement image are set in advance for each shape of dynamic object. The image generation module 2035 applies the transformation set for the detected shape to the original image, and generates the advertisement image by determining the scale of the transformed image based on the size of the dynamic object.
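The affine-transformation step can be illustrated with coordinates alone. The sketch below derives the affine map that sends the unit square of the original advertisement onto a detected region described by three of its corners; representing the region by corner points, rather than warping full pixel arrays as an actual renderer would, is an assumption made for illustration.

```python
def affine_from_unit_square(p0, p1, p2):
    """Affine map sending (0,0)->p0, (1,0)->p1, (0,1)->p2.
    Returns (a, b, tx, c, d, ty) so that x' = a*x + b*y + tx and
    y' = c*x + d*y + ty. For the unit square, the matrix columns are
    simply the region's edge vectors."""
    a, c = p1[0] - p0[0], p1[1] - p0[1]
    b, d = p2[0] - p0[0], p2[1] - p0[1]
    return (a, b, p0[0], c, d, p0[1])

def apply_affine(m, pt):
    a, b, tx, c, d, ty = m
    x, y = pt
    return (a * x + b * y + tx, c * x + d * y + ty)

# Region detected on a tilted chest area: a slanted parallelogram
# with corners (10, 20), (30, 24), (8, 40) (hypothetical values).
m = affine_from_unit_square((10, 20), (30, 24), (8, 40))
center = apply_affine(m, (0.5, 0.5))  # centre of the ad -> centre of region
```

Applying the same map to every pixel coordinate of the original advertisement (or its inverse to every pixel of the target region) yields the warped advertisement image.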
  • the image generation module 2035 may, for example, create a three-dimensional model from the image of the dynamic object acquired by the image analysis module 2034, and use the created three-dimensional model to generate an advertisement image. Specifically, for example, the image generation module 2035 acquires two-dimensional images of the dynamic object from multiple directions. Image generation module 2035 creates a three-dimensional model of the dynamic object based on two-dimensional images acquired from multiple directions. The image generation module 2035 pastes the original image of the advertisement determined by the advertisement determination module 2033 onto the surface of the three-dimensional model. The image generation module 2035 generates an advertisement image by executing rendering processing on the original image attached to the 3D model.
  • the advertisements to be superimposed may be different for each type of area in the dynamic object. For example, if the dynamic object is a person, different advertisements may be overlaid on the chest, back, and shoulders. Also, for example, when the dynamic object is an automobile, different advertisements may be superimposed on the front wing, body, and rear wing.
  • the image generation module 2035 generates an advertisement image suitable for the area by transforming the original image of the advertisement corresponding to the detected area according to the aspect of the area.
  • the synthesizing module 2036 synthesizes the advertisement image generated by the image generating module 2035 with the video. Specifically, for example, the synthesizing module 2036 superimposes the advertisement image generated by the image generating module 2035 on the corresponding area in the video.
  • the presentation module 2037 presents the video combined with the advertising content to the viewer.
  • the server 20 can accurately superimpose and present the advertising content to the viewer according to the dynamic objects of various modes included in the video.
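The per-region superimposition performed by the synthesizing module 2036 can be sketched as a simple paste of the generated advertisement image into a frame. Real compositing would also blend edges and respect occlusion, which this illustration omits, and the list-of-rows image representation is an assumption.

```python
def superimpose(frame, ad_image, top, left):
    """Overwrite the region of `frame` starting at (top, left) with
    `ad_image`; both are lists of rows of pixel values. Ad pixels that
    fall outside the frame are clipped."""
    out = [row[:] for row in frame]  # copy so the source frame is untouched
    for dy, ad_row in enumerate(ad_image):
        y = top + dy
        if 0 <= y < len(out):
            for dx, v in enumerate(ad_row):
                x = left + dx
                if 0 <= x < len(out[y]):
                    out[y][x] = v
    return out

frame = [[0] * 4 for _ in range(3)]
ad = [[9, 9],
      [9, 9]]
composited = superimpose(frame, ad, 1, 1)
```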
  • FIG. 3 is a diagram showing the data structures of the advertisement information database 2021 and the viewer information database 2022 stored by the server 20.
  • The advertisement information database 2021 includes the item “advertisement ID”, the item “advertiser”, the item “product”, the item “attribute”, the item “target”, the item “advertisement size”, and the like.
  • the item "advertisement ID" is information that identifies each advertisement.
  • the item “advertiser” is information that identifies the advertiser who provides each advertisement. For example, it indicates that the advertiser with the advertisement ID "A001" is "A company”.
  • the item "product” is information indicating the details of the advertisement. For example, it indicates that the product with the advertisement ID “A001” is “tea”.
  • the item "target” indicates the attribute of the viewer targeted for displaying the advertisement. For example, it indicates that the target of the advertisement ID "A001" is "general”.
  • the item "attribute” indicates the attribute given to the advertisement. Specifically, it shows information about what classification each advertisement content belongs to. For example, the attribute of the advertisement ID "A001" indicates "luxury goods". In one aspect, for example, the advertisement determination module 2033 randomly determines one advertisement among the advertisements whose attribute is "luxury goods.”
  • the item "advertisement size” is information indicating the size of the advertisement.
  • the size represents, for example, the size displayed in a predetermined area of the dynamic object. Multiple sizes may be stored for the same advertisement. For example, it indicates that the advertisement size for the product "tea” with the advertisement ID "A001" is "** cm x ** cm", “++ cm x ++ cm", and " ⁇ cm x ⁇ cm".
  • the advertisement size may have a resolution of dpi (dots per inch).
  • The advertisement determination module 2033 may determine the size of the advertisement based on the shape of the dynamic object included in the video, the area within the dynamic object, and the like, as analyzed by the image analysis module 2034.
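Selecting a size against the analysed region can be sketched as picking the largest stored advertisement size that fits. The concrete dimensions below are purely illustrative, since the sizes in the database figure are masked ("** cm x ** cm", etc.).

```python
# Hypothetical set of stored sizes (width, height) in cm for one ad,
# standing in for the masked values in the advertisement size item.
AD_SIZES = [(30, 20), (20, 15), (10, 8)]

def pick_advertisement_size(region_w, region_h, sizes=AD_SIZES):
    """Return the largest stored size that fits inside the analysed
    region, or None when even the smallest stored size does not fit."""
    for w, h in sorted(sizes, key=lambda s: s[0] * s[1], reverse=True):
        if w <= region_w and h <= region_h:
            return (w, h)
    return None
```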
  • The viewer information database 2022 includes the item “viewer ID”, the item “registered name”, the item “age”, the item “gender”, the item “hobbies”, the item “browsing history”, the item “purchase history”, and the like.
  • the item "viewer ID” is information that identifies each viewer who views the video.
  • the item "registered name” is information that identifies the name of the viewer.
  • the registered name of the viewer ID "U001" is "A”.
  • the registered name may be the name of the viewer, or may be a handle name in a service for viewing video.
  • the item "age” is information indicating the age of the viewer.
  • the item "gender” is information indicating the gender of the viewer.
  • the item “hobby” is information about the viewer's hobby. For example, it indicates that the hobbies of the viewer ID “U001" are “sports", “movies”, “games”, and the like.
  • The server 20 may store this information by receiving an input operation from a user or a viewer, or may estimate the viewer's hobby information from information such as browsing history and purchase history.
  • the item "Browsing history” shows information related to the viewer's browsing history of websites, videos of video distribution services, etc.
  • The server 20 acquires browsing history information by an input operation from the viewer, or by receiving the information from a website, video distribution service, etc. affiliated with the service that the system 1 provides to the viewer.
  • the viewing history of the viewer ID "U001" indicates "A broadcast", "https://**", "B broadcast”, and the like.
  • the item "purchase history” shows information about the viewer's purchase history on the EC site.
  • the server 20 acquires purchase history information by an input operation from the viewer or by receiving the information from an EC site or the like affiliated with the service provided by the system 1 to the viewer.
  • the purchase history of the viewer ID "U001” indicates "2020/10/01 Dumbbell", “2020/10/3 Protein”, “2020/10/5 Blu-ray", and the like.
  • Based on various information about the viewer, for example, age, gender, hobbies, the websites and video distribution services the viewer has browsed, and the history of products the viewer has purchased, the server 20 can determine the optimum advertisement for each viewer. This allows the server 20 to further enhance the user's viewing experience.
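The estimation of hobby information from purchase history mentioned above can be sketched as simple keyword matching against entries like viewer U001's "2020/10/01 Dumbbell". The keyword-to-hobby table and the date/item parsing are illustrative assumptions; the publication only states that such an estimate may be made.

```python
# Hypothetical keyword-to-hobby mapping used to estimate the item
# "hobbies" from the item "purchase history".
HOBBY_KEYWORDS = {
    "sports": {"dumbbell", "protein", "running shoes"},
    "movies": {"blu-ray", "projector"},
}

def estimate_hobbies(purchase_history):
    """Guess hobbies from purchase-history entries of the form
    '<date> <item name>' (e.g. '2020/10/01 Dumbbell')."""
    hobbies = set()
    for entry in purchase_history:
        item = entry.split(maxsplit=1)[-1].lower()  # drop the date part
        for hobby, keywords in HOBBY_KEYWORDS.items():
            if item in keywords:
                hobbies.add(hobby)
    return sorted(hobbies)

history = ["2020/10/01 Dumbbell", "2020/10/3 Protein", "2020/10/5 Blu-ray"]
```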
  • FIG. 4 is a diagram showing an overview of the system 1. In the example shown in FIG. 4, the display of an advertisement in a video of a game such as soccer captured by the imaging device 40 will be described.
  • the imaging device 40 captures images including dynamic objects.
  • the imaging device 40 transmits the image including the captured dynamic object to the edge server 30 .
  • the edge server 30 transmits the video including the received dynamic object to the server 20.
  • the server 20 analyzes the video acquired from the edge server 30 frame by frame, and acquires information about dynamic objects in the video. Server 20 generates an advertisement image based on the acquired information. The server 20 synthesizes the generated advertisement image with the video and displays it to the viewer.
  • The server 20 generates an advertisement image based on the size, shape, orientation, etc. of the dynamic object included in each frame of the video, and superimposes the generated advertisement image on the dynamic object. Therefore, the server 20 can superimpose the advertisement on the dynamic object naturally, without making the user feel uncomfortable.
  • For a viewer watching a video of a game such as soccer, the advertisement is displayed in a natural manner on the player, which is a dynamic object, so the viewer can watch the video without feeling uncomfortable and without losing the sense of immersion.
  • Here, a natural manner means that when a dynamic object such as a person changes direction, the advertising content does not protrude from the person and is always displayed following the movement of the person.
  • viewers can visually recognize the advertisement that is synthesized with the dynamic object with the same feeling as the advertisement printed on the dynamic object.
  • The server 20 can synthesize the advertisement image with the dynamic object. This allows an advertiser to display, in video distributed over the network, an advertisement different from the one physically printed on the dynamic object, even when an advertisement is physically printed on it.
  • the shooting device 40 shoots an image in a preset shooting direction.
  • The video captured in the imaging direction of the imaging device 40 may include multiple dynamic objects, such as a player wearing a uniform and a soccer ball.
  • the imaging device 40 transmits a video signal of the captured video to the server 20 via the edge server 30 .
  • FIG. 5 is a flowchart showing a series of processes in which the control unit 203 of the server 20 generates an advertisement image and synthesizes the generated advertisement image with a video.
  • the viewer operates the terminal device 10 and selects the video they want to view.
  • the video to be selected may be a video stored in the video database 2023 or a video distributed in real time.
  • In step S501, upon receiving a video selection from the terminal device 10, the control unit 203 of the server 20 determines an advertisement to be displayed based on the selected video and viewer information. Specifically, for example, the control unit 203 uses the advertisement determination module 2033 to identify the viewer based on the viewer's log-in information, and extracts information about the identified viewer from the viewer information database 2022. The advertisement determination module 2033 determines an appropriate advertisement to display to the viewer based on, for example, information associated with the video selected by the viewer and the extracted viewer information.
  • In step S502, the control unit 203 acquires the video selected by the viewer. Specifically, the control unit 203 reads the video selected by the viewer from the video database 2023. Alternatively, the control unit 203 acquires the video selected by the viewer from a predetermined video distribution source.
  • In step S503, the control unit 203 causes the image analysis module 2034 to perform image analysis, for example, for each video frame.
  • the image analysis module 2034 obtains information about dynamic objects included in the video through image analysis.
  • In step S504, the control unit 203 causes the image generation module 2035 to generate an advertisement image based on the information acquired by the image analysis module 2034 and the advertisement determined by the advertisement determination module 2033.
  • For example, based on the information acquired by the image analysis module 2034, the control unit 203 determines, for each frame, the mode of displaying the advertisement, for example, the size and shape of the advertisement and the inclination of the image in the depth direction.
  • the control unit 203 converts the advertisement determined by the advertisement determination module 2033 based on the determined size, shape, inclination, and the like to generate an advertisement image.
  • In step S505, the control unit 203 causes the synthesizing module 2036 to superimpose the advertisement image generated by the image generation module 2035 on the predetermined area in the dynamic object recognized by the image analysis.
  • the control unit 203 executes the processing of steps S503 to S505, for example, for each frame.
  • the server 20 can accurately superimpose an advertisement image corresponding to the viewer on the dynamic object in consideration of the motion of the dynamic object. Therefore, the server 20 can present the advertisement to the viewer in a natural manner without making the viewer feel uncomfortable.
  • In the above, the case where the control unit 203 determines an advertisement at the start of video viewing has been described as an example. However, advertisement determination is not limited to the start of video viewing.
  • the control unit 203 may determine an advertisement at predetermined timing (predetermined period, predetermined time, occurrence of a predetermined event, etc.). At this time, the control unit 203 may change the advertisement sponsor before and after switching.
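The flow of steps S501 through S505 can be summarised as the following skeleton. The helper callables are stand-ins for the modules described above (advertisement determination, image analysis, image generation, and synthesis), not actual implementations.

```python
def distribute_with_ads(frames, viewer,
                        determine_ad, analyze_frame,
                        generate_ad_image, composite):
    """S501: determine the ad for this viewer; S502: take the selected
    video's frames; S503-S505: per frame, analyse the dynamic object,
    generate a matching ad image, and superimpose it."""
    ad = determine_ad(viewer)                         # S501
    out = []
    for frame in frames:                              # S502 (video acquired)
        info = analyze_frame(frame)                   # S503: region/shape/tilt
        ad_image = generate_ad_image(ad, info)        # S504
        out.append(composite(frame, ad_image, info))  # S505
    return out

# Tiny stand-in callables, just to show the data flow.
frames = ["f1", "f2"]
result = distribute_with_ads(
    frames, {"viewer_id": "U001"},
    determine_ad=lambda v: "AD",
    analyze_frame=lambda f: {"region": f},
    generate_ad_image=lambda ad, info: f"{ad}@{info['region']}",
    composite=lambda f, img, info: (f, img),
)
```

Re-running `determine_ad` at a predetermined timing, as described above, would amount to calling it inside the per-frame loop under a timing condition rather than once up front.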
  • FIG. 6 is a diagram showing a display example of a video managed by the server 20.
  • FIG. 7 is a diagram showing a display example of a video distributed from the server 20 and displayed on the terminal device 10.
  • the image shown in FIG. 6 represents an image of a predetermined frame in video.
  • the image shown in FIG. 6 includes dynamic objects 602A, 602B, 602C, 602D, 603A, 603B.
  • Dynamic objects 602A, 602B, 602C, 603A, 603B represent players and dynamic object 602D represents a soccer ball.
  • in the image shown in FIG. 6, advertisement images are not yet composited on the predetermined regions of the dynamic objects 602A, 602B, 602C, 602D, 603A, and 603B, such as the chest regions 604A, 604B, and 604C, the back region 605A, the shoulder region 605B, and the ball center region 604D.
  • the control unit 203 acquires information on the dynamic objects 602A, 602B, 602C, 602D, 603A, and 603B included in the video using the image analysis module 2034.
  • the control unit 203 uses the image generation module 2035 to generate an advertisement image based on the information acquired by the image analysis module 2034 and the advertisement determined by the advertisement determination module 2033 .
  • the control unit 203 transforms the advertisement based on, for example, the size, shape, and inclination in the depth direction of the chest regions 604A, 604B, and 604C, the back region 605A, the shoulder region 605B, and the ball center region 604D, and generates advertisement images 606A, 606B, 606C, 606D, 607A, and 607B.
  • the control unit 203 causes the synthesizing module 2036 to composite the advertisement images 606A, 606B, 606C, 606D, 607A, and 607B generated by the image generation module 2035 onto the predetermined regions recognized by image analysis (chest regions 604A, 604B, and 604C, ball center region 604D, back region 605A, and shoulder region 605B).
  • the image shown in FIG. 7 is the image shown in FIG. 6 as displayed to a predetermined viewer.
  • in FIG. 7, the advertisement images 606A, 606B, 606C, 606D, 607A, and 607B are superimposed on their respective regions.
  • the server 20 can naturally superimpose an advertisement that matches the preferences of each viewer on the video being viewed, without giving the viewer a sense of discomfort. An advertiser can therefore easily switch the advertisement displayed on a dynamic object according to the viewer. Advertisements can also be displayed on dynamic objects even when there are multiple advertisers. Because advertisements can be composited into the distributed video in addition to any advertisement actually printed on the dynamic object, more advertisers can be recruited than before.
  • in a team-based competition or the like, the control unit 203 may superimpose advertisement images of different types, for example of a different sponsor for each team.
  • the control unit 203 may change the advertiser of the advertisement between the advertisement images 606A, 606B, 606C and the advertisement images 607A, 607B.
  • the server 20 can thus display advertisements in a way that clearly indicates each team's sponsors, display advertisements in more effective areas, and so on, improving the effectiveness of the advertisements for viewers.
  • the server 20 analyzes an image to acquire information about a dynamic object and determine an area for displaying an advertisement.
  • these processes may instead be performed by the imaging device 40 or the edge server 30. That is, for example, the imaging device 40 acquires information about the dynamic object, determines the display area and mode of the advertisement, and transmits the information to the server 20.
  • the server 20 generates an advertisement image based on the information received from the imaging device 40 and synthesizes the advertisement so as to be superimposed on the dynamic object.
  • the edge server 30 acquires information about the dynamic object, determines the display area and mode of the advertisement, and transmits the information to the server 20 .
  • the server 20 generates an advertisement image based on the information received from the edge server 30 and synthesizes the advertisement to be superimposed on the dynamic object.
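When detection runs on the imaging device 40 or the edge server 30 as described above, the information sent to the server 20 might look like the record below. Every field name here is an assumption made for illustration; the document does not define a wire format.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class RegionReport:
    frame_no: int      # which video frame the detection refers to
    object_id: str     # detected dynamic object (e.g. a player)
    region: str        # e.g. "chest", "back", "shoulder"
    quad: list         # 4 corner points of the region in frame coordinates
    tilt_deg: float    # inclination of the region in the depth direction

def encode(report: RegionReport) -> str:
    """Serialize a report for transmission to the server 20."""
    return json.dumps(asdict(report))

def decode(payload: str) -> RegionReport:
    """Reconstruct the report on the server side."""
    return RegionReport(**json.loads(payload))
```

The server 20 would then only need the generation and compositing steps (S504, S505), using the received quadrilateral and tilt directly.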
  • a case has also been described in which a three-dimensional model is created based on images acquired from a plurality of imaging devices 40, and the advertisement image is displayed as a texture on the surface of the three-dimensional model.
  • sensors may be attached to the dynamic object.
  • the sensor is a motion sensor attached to a part of the body of the dynamic object, and detects information about the motion of that body. For example, the sensor detects information such as the body's tilt, motion speed, and motion direction.
  • the sensor is implemented by, for example, a gyro sensor, an acceleration sensor, or the like.
  • the sensor transmits the various types of detected sensing information to the edge server 30.
  • the edge server 30 transmits the sensing information to the server 20.
  • the server 20 may create a three-dimensional model based on the received sensing information using the image generation module 2035.
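For reference, a single static accelerometer sample already yields a rough body-tilt estimate; a real system would fuse it with the gyro data (for example via a complementary filter). The axis convention below is an assumption, not something specified in the document.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Pitch and roll in degrees from one accelerometer sample,
    assuming gravity dominates the signal and the z axis points
    'up' when the wearer stands still."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```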
  • the information about the dynamic object acquired by the image analysis module 2034 includes, for example: the region of the dynamic object included in the video; a predetermined region within the dynamic object (chest, back, shoulder, front wing, body, rear wing, etc.); the movement direction of the dynamic object included in the video; and the shape of the dynamic object included in the video.
  • the information acquired by the image analysis module 2034 is not limited to these.
  • the information acquired by the image analysis module 2034 may also be, for example: a region containing a predetermined color (for example, a uniform color); a region containing predetermined characters or marks (for example, a logo mark); or a region that has a predetermined positional relationship with a characteristic part (for example, the chest detected by relative coordinates from the face region).
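Two of these heuristics are easy to sketch with plain NumPy; the color tolerance and the chest offset below are illustrative values chosen for the sketch, not taken from the document.

```python
import numpy as np

def uniform_color_bbox(img, color, tol=10):
    """Bounding box (x0, y0, x1, y1) of pixels within `tol` of `color`,
    a rough stand-in for 'region containing a predetermined color'
    such as a uniform; returns None if no pixel matches."""
    mask = (np.abs(img.astype(int) - np.array(color)) <= tol).all(axis=-1)
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

def chest_from_face(face_bbox):
    """Place a chest box by a fixed relative offset from a detected
    face box, as in the 'relative coordinates from the face area'
    example: one face-height below the face, same width."""
    x0, y0, x1, y1 = face_bbox
    h = y1 - y0
    return x0, y1 + h // 2, x1, y1 + 2 * h
```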
  • (Appendix 1) A program to be executed by a computer 20 comprising a processor 29 and a memory 25, the program causing the processor 29 to execute: a step of analyzing a video frame by frame and acquiring information about a dynamic object included as a subject in the video (S503); a step of generating an advertisement image based on the shape, size, and orientation of a region in the dynamic object grasped from the acquired information (S504); and a step of displaying the generated advertisement image in association with the region (S505).
  • the step of determining the advertisement (S501) is executed by the processor 29 at a predetermined timing, and in the step of determining the advertisement (S501), the advertisement is determined such that the advertiser differs based on information associated with the video; the program according to Appendix 8. (Paragraph 0034)
  • (Appendix 10) A method to be executed by a computer 20 comprising a processor 29 and a memory 25, the method causing the processor 29 to execute: a step of analyzing a video frame by frame and acquiring information about a dynamic object included as a subject in the video (S503); a step of generating an advertisement image based on the shape, size, and orientation of a region in the dynamic object grasped from the acquired information (S504); and a step of displaying the generated advertisement image in association with the region (S505).
  • An information processing apparatus 20 comprising a control unit 203, wherein the control unit 203 executes: a step of analyzing a video frame by frame and acquiring information about a dynamic object included as a subject in the video (S503); a step of generating an advertisement image based on the shape, size, and orientation of a region in the dynamic object grasped from the acquired information (S504); and a step of displaying the generated advertisement image in association with the region (S505).
  • A system comprising: means for analyzing a video frame by frame and acquiring information about a dynamic object included as a subject in the video (S503); means for generating an advertisement image based on the shape, size, and orientation of a region in the dynamic object grasped from the acquired information (S504); and means for displaying the generated advertisement image in association with the region (S505).
  • 20 server, 22 communication IF, 23 input/output IF, 25 memory, 26 storage, 29 processor, 30 edge server, 40 imaging device, 80 network, 201 communication unit, 202 control unit, 203 communication unit, 2021 advertisement information database, 2022 viewer information database, 2023 video database.


Abstract

The purpose of the present invention is to insert an advertisement into a region of a moving target object whose shape and size change within a video image. To this end, the present invention provides a program to be executed on a computer comprising a processor and a memory. The program causes the processor to execute: a step of analyzing the video image frame by frame and obtaining information about a moving target object included as a photographic subject in the video image; a step of generating an advertisement image based on the shape, size, and orientation of a region in the moving target object as grasped from the obtained information; and a step of displaying the generated advertisement image in association with that region.
PCT/IB2022/051444 2021-02-12 2022-02-18 Programme, procédé, dispositif de traitement d'informations et système WO2022172259A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-020593 2021-02-12
JP2021020593A JP6942898B1 (ja) 2021-02-12 2021-02-12 プログラム、方法、情報処理装置、システム

Publications (1)

Publication Number Publication Date
WO2022172259A1 true WO2022172259A1 (fr) 2022-08-18

Family

ID=77847087

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2022/051444 WO2022172259A1 (fr) 2021-02-12 2022-02-18 Programme, procédé, dispositif de traitement d'informations et système

Country Status (2)

Country Link
JP (2) JP6942898B1 (fr)
WO (1) WO2022172259A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001283079A (ja) * 2000-03-28 2001-10-12 Sony Corp 通信サービス方法とその装置、通信端末装置、通信システム、広告宣伝方法
JP2009109887A (ja) * 2007-10-31 2009-05-21 Akiji Nagasawa 合成プログラム、記録媒体及び合成装置
WO2019078038A1 (fr) * 2017-10-20 2019-04-25 emmmR株式会社 Système de publicité

Also Published As

Publication number Publication date
JP6942898B1 (ja) 2021-09-29
JP2022123816A (ja) 2022-08-24
JP2022123345A (ja) 2022-08-24

Similar Documents

Publication Publication Date Title
US20190030441A1 (en) Using a Portable Device to Interface with a Scene Rendered on a Main Display
US10948982B2 (en) Methods and systems for integrating virtual content into an immersive virtual reality world based on real-world scenery
US10691202B2 (en) Virtual reality system including social graph
US10121513B2 (en) Dynamic image content overlaying
CN107633441A (zh) 追踪识别视频图像中的商品并展示商品信息的方法和装置
US20170286993A1 (en) Methods and Systems for Inserting Promotional Content into an Immersive Virtual Reality World
US10701426B1 (en) Virtual reality system including social graph
US20070273644A1 (en) Personal device with image-acquisition functions for the application of augmented reality resources and method
EP3425483B1 (fr) Dispositif de reconnaissance d'objet intelligent
JP2004145448A (ja) 端末装置、サーバ装置および画像加工方法
CN107911737A (zh) 媒体内容的展示方法、装置、计算设备及存储介质
US20210383579A1 (en) Systems and methods for enhancing live audience experience on electronic device
JP6609078B1 (ja) コンテンツ配信システム、コンテンツ配信方法、およびコンテンツ配信プログラム
JP7316584B2 (ja) オーグメンテーション画像表示方法およびオーグメンテーション画像表示システム
JP6559375B1 (ja) コンテンツ配信システム、コンテンツ配信方法、およびコンテンツ配信プログラム
CN107578306A (zh) 追踪识别视频图像中的商品并展示商品信息的方法和装置
WO2022172259A1 (fr) Programme, procédé, dispositif de traitement d'informations et système
JP2022546664A (ja) 仮想空間における利用者を特定した広告
US20180160093A1 (en) Portable device and operation method thereof
CN114760517B (zh) 图像活动嵌入方法及其装置、设备、介质、产品
CA3171181A1 (fr) Systeme et procede d'analyse de videos en temps reel
JP2021016015A (ja) 広告システム
JP7344084B2 (ja) コンテンツ配信システム、コンテンツ配信方法、およびコンテンツ配信プログラム
US20220207787A1 (en) Method and system for inserting secondary multimedia information relative to primary multimedia information
JP2020150520A (ja) 注目度利活用装置、注目度利活用方法、および注目度利活用プログラム

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22752452; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 22752452; Country of ref document: EP; Kind code of ref document: A1)