US11232605B2 - Method for generating image data, program, and information processing device - Google Patents

Method for generating image data, program, and information processing device

Info

Publication number
US11232605B2
Authority
US
United States
Prior art keywords
video image
image
superimposed
area
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US17/104,111
Other versions
US20210158578A1 (en)
Inventor
Toshiyuki Sakai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Seiko Epson Corp
Original Assignee
Seiko Epson Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Seiko Epson Corp filed Critical Seiko Epson Corp
Assigned to SEIKO EPSON CORPORATION. Assignment of assignors interest (see document for details). Assignors: SAKAI, TOSHIYUKI
Publication of US20210158578A1 publication Critical patent/US20210158578A1/en
Application granted granted Critical
Publication of US11232605B2 publication Critical patent/US11232605B2/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 - 2D [Two Dimensional] image generation
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 13/00 - Animation
    • G06T 13/80 - 2D [Two Dimensional] animation, e.g. using sprites
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 - Aspects of display data processing
    • G09G 2340/04 - Changes in size, position or resolution of an image
    • G09G 2340/0464 - Positioning
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 - Aspects of display data processing
    • G09G 2340/12 - Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G 2340/125 - Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2354/00 - Aspects of interface with display user

Definitions

  • the present disclosure relates to a method for generating image data, a program, and an information processing device.
  • JP-A-2010-92402 describes an animation preparation device generating image data of a video image. On receiving an instruction from a user about a movement to be executed by a character, the animation preparation device described in JP-A-2010-92402 generates image data of a video image showing the character executing the movement.
  • a method for generating image data includes: displaying a first object corresponding to a first video image on a display surface; and generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object.
  • the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1.
  • the first superimposed video image has a time length that is the predetermined time.
  • the first superimposed video image is a video image in which display of the first video image is executed m times in a state where the first video image is superimposed on the first area of the predetermined image.
  • a method for generating image data includes: displaying an object corresponding to a predetermined video image on a display surface; generating image data representing a superimposed video image in which the predetermined video image is superimposed on a first area of a predetermined image, based on an operation on the object; and when a first time that is set as a time length of the superimposed video image is different from a second time that is set as a time length of the predetermined video image, changing the time length of the predetermined video image included in the superimposed video image to a third time that is different from the second time.
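  • The adjustment above can be sketched as follows. This is a minimal illustration, assuming the third time is chosen so that an integer number of clip repetitions exactly fills the superimposed video; the selection rule itself is not specified here, and the function name is illustrative:

```python
def adjusted_clip_length(first_time: float, second_time: float) -> float:
    """Return a third time, different from second_time when needed, such that
    an integer number of repetitions of the clip fills the superimposed video
    exactly. The rounding rule here is an assumption for illustration."""
    if first_time == second_time:
        return second_time  # the clip already fits exactly once
    # Pick the repetition count closest to the original clip length,
    # then stretch or shrink the clip so m repetitions fill first_time.
    m = max(1, round(first_time / second_time))
    return first_time / m
```

For example, with a 15.0-second superimposed video and a 7.0-second clip, the clip would be stretched to 7.5 seconds and repeated twice.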
  • An information processing device includes: a display control unit causing a first object corresponding to a first video image to be displayed on a display surface; and a generation unit generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object.
  • the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1.
  • the first superimposed video image has a time length that is the predetermined time.
  • the first superimposed video image is a video image in which display of the first video image is executed m times in a state where the first video image is superimposed on the first area of the predetermined image.
  • FIG. 1 shows an information processing device 100 according to a first embodiment.
  • FIG. 2 shows an example of the information processing device 100 .
  • FIG. 3 explains an example of a first video image b 1 .
  • FIG. 4 explains another example of the first video image b 1 .
  • FIG. 5 explains an example of a second video image b 2 .
  • FIG. 6 explains an example of a first superimposed video image d 1 .
  • FIG. 7 explains an example of a second superimposed video image d 2 .
  • FIG. 8 is a flowchart for explaining operations of the information processing device 100 .
  • FIG. 1 shows an information processing device 100 according to a first embodiment.
  • a smartphone is shown as an example of the information processing device 100 .
  • the information processing device 100 is not limited to a smartphone.
  • the information processing device 100 may be, for example, a PC (personal computer) or tablet terminal.
  • the information processing device 100 includes a display surface 1 a displaying various images.
  • the display surface 1 a shown in FIG. 1 displays an operation screen e.
  • the information processing device 100 generates image data representing a video image, based on an operation on the display surface 1 a .
  • a time length of the video image is set to 15.0 seconds.
  • 15.0 seconds is an example of a predetermined time.
  • the predetermined time is not limited to 15.0 seconds.
  • the predetermined time may be longer than 0 seconds and shorter than 15.0 seconds.
  • the predetermined time may be longer than 15.0 seconds.
  • the video image represented by the image data is repeatedly displayed, for example, by a display device such as a projector.
  • a display device such as a projector.
  • a person viewing the video image is highly likely to recognize the repeatedly played video image as a seamless video image.
  • Such a video image is used, for example, for a product advertisement or for a light effect to create a certain impression of a product.
  • FIG. 2 shows an example of the information processing device 100 .
  • the information processing device 100 includes a touch panel 1 , a communication device 2 , a storage device 3 , and a processing device 4 .
  • the touch panel 1 is a device in which a display device displaying an image and an input device accepting an operation by a user are integrated together.
  • the touch panel 1 includes the display surface 1 a .
  • the touch panel 1 displays various images on the display surface 1 a .
  • the touch panel 1 detects a touch position, using the electrostatic capacitance formed between the touch panel 1 and an object in contact with the touch panel 1 .
  • the communication device 2 communicates with various devices.
  • the communication device 2 communicates, for example, with a projector 200 via a wireless LAN (local area network).
  • the communication device 2 may communicate with a device such as the projector 200 via a different communication form from wireless LAN.
  • the different communication form from wireless LAN is, for example, wired communication or Bluetooth. Bluetooth is a registered trademark.
  • the projector 200 is an example of a display device.
  • the display device is not limited to a projector and may be a display, for example, an FPD (flat panel display).
  • the FPD is, for example, a liquid crystal display, plasma display, or organic EL (electroluminescence) display.
  • the storage device 3 is a recording medium readable by the processing device 4 .
  • the storage device 3 includes, for example, a non-volatile memory and a volatile memory.
  • the non-volatile memory is, for example, a ROM (read-only memory), EPROM (erasable programmable read-only memory), or EEPROM (electrically erasable programmable read-only memory).
  • the volatile memory is, for example, a RAM (random-access memory).
  • the storage device 3 stores a program executed by the processing device 4 and various data used by the processing device 4 .
  • the program can also be referred to as an “application program”, “application software”, or “app”.
  • the program is acquired, for example, from a server or the like, not illustrated, via the communication device 2 and is subsequently stored in the storage device 3 .
  • the program may be stored in the storage device 3 in advance.
  • the processing device 4 is formed of, for example, a single processor or a plurality of processors.
  • the processing device 4 is formed of, for example, a single CPU (central processing unit) or a plurality of CPUs.
  • a part or all of the functions of the processing device 4 may be implemented by a circuit such as a DSP (digital signal processor), ASIC (application-specific integrated circuit), PLD (programmable logic device), or FPGA (field-programmable gate array).
  • the processing device 4 executes various kinds of processing in parallel or in sequence.
  • the processing device 4 reads the program from the storage device 3 .
  • the processing device 4 executes the program read from the storage device 3 and thus implements a display control unit 41 , a generation unit 42 , and an operation control unit 43 .
  • the display control unit 41 controls the touch panel 1 and thus controls the display on the display surface 1 a .
  • the display control unit 41 causes a first object a 1 and a second object a 2 to be displayed on the display surface 1 a , as shown in FIG. 1 .
  • the first object a 1 is made to correspond to a first video image b 1 as illustrated in FIG. 3 .
  • the first video image b 1 can be a component of a video image represented by image data generated by the information processing device 100 .
  • the first video image b 1 can also be referred to as a first component candidate.
  • the first video image b 1 shows a movement of an object.
  • the first video image b 1 illustrated in FIG. 3 is a video image in which a Christmas tree b 11 makes one rotation in the direction of a first arrow b 12 .
  • the first image of the first video image b 1 illustrated in FIG. 3 coincides with the last image of the first video image b 1 illustrated in FIG. 3 .
  • the first video image b 1 is not limited to the video image as illustrated in FIG. 3 .
  • the first video image b 1 may be a video image in which a cloud b 13 moves in the direction of a second arrow b 14 , thus disappears from the video image, subsequently reappears from the left end of the video image, then moves in the direction of the second arrow b 14 , and ultimately turns into the same state as the initial state, as illustrated in FIG. 4 .
  • the first image of the first video image b 1 illustrated in FIG. 4 coincides with the last image of the first video image b 1 illustrated in FIG. 4 .
  • the video image presented by repeatedly displaying the first video image b 1 can be recognized as a seamless video image.
  • the first image of the first video image b 1 may not coincide with the last image of the first video image b 1 .
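  • Whether a clip loops seamlessly in this sense can be checked mechanically. A minimal sketch, assuming frames are comparable values (the frame representation is illustrative, not from this disclosure):

```python
def is_seamless_loop(frames) -> bool:
    """True when the first frame equals the last frame, so repeated
    playback shows no visible seam at the loop point."""
    return bool(frames) and frames[0] == frames[-1]
```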
  • a time length of the first video image b 1 is the time length of the video image represented by the image data generated by the information processing device 100 , that is, 15.0 seconds, divided by m, m being an integer equal to or greater than 1.
  • the time length of the first video image b 1 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds.
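  • The constraint that the clip length is the predetermined time divided by an integer m can be expressed as a small check. A sketch under the 15.0-second example (the function name and tolerance are illustrative):

```python
PREDETERMINED_TIME = 15.0  # seconds, as in the example above

def repetition_count(clip_length: float, total: float = PREDETERMINED_TIME) -> int:
    """Return m such that clip_length * m == total; raise if the clip
    length does not evenly divide the predetermined time."""
    m = total / clip_length
    if round(m) < 1 or abs(m - round(m)) > 1e-9:
        raise ValueError(f"{clip_length} s does not evenly divide {total} s")
    return round(m)
```

A 7.5-second clip yields m = 2 and a 5.0-second clip yields m = 3.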
  • the first object a 1 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the first video image b 1 or a letter representing the first video image b 1 .
  • the second object a 2 is made to correspond to a second video image b 2 as illustrated in FIG. 5 .
  • the second video image b 2 can be a component of a video image represented by image data generated by the information processing device 100 .
  • the second video image b 2 can also be referred to as a second component candidate.
  • the second video image b 2 is a different video image from the first video image b 1 .
  • the second video image b 2 illustrated in FIG. 5 is a video image in which a present box b 21 shifts from a stationary state into a vibrating state and subsequently shifts back into the stationary state.
  • the first image of the second video image b 2 illustrated in FIG. 5 coincides with the last image of the second video image b 2 illustrated in FIG. 5 .
  • the second video image b 2 is not limited to the video image of the present box b 21 as illustrated in FIG. 5 and can be suitably changed.
  • the video image presented by repeatedly displaying the second video image b 2 can be recognized as a seamless video image.
  • the first image of the second video image b 2 may not coincide with the last image of the second video image b 2 .
  • a time length of the second video image b 2 is the time length of the video image represented by the image data generated by the information processing device 100 , that is, 15.0 seconds, divided by n, n being an integer equal to or greater than 1.
  • the time length of the second video image b 2 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds.
  • the second object a 2 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the second video image b 2 or a letter representing the second video image b 2 .
  • the generation unit 42 generates image data representing a video image, based on an operation on the touch panel 1 .
  • the generation unit 42 generates first image data representing a first superimposed video image d 1 as illustrated in FIG. 6 , based on a first operation on the first object a 1 , for example, a touch operation on the first object a 1 by the user.
  • the first video image b 1 is superimposed on a first area c 1 of a background image c.
  • the background image c is a single-color image, for example, a black image.
  • the single-color image is not limited to the black image.
  • the single-color image may be a white image or blue image.
  • the background image c is not limited to the single-color image.
  • the background image c may be an image having a plurality of colors.
  • the background image c may be a still image or video image.
  • the background image c may be preset or may be set by the user.
  • the background image c is an example of a predetermined image.
  • the first image data is an example of the image data generated by the information processing device 100 .
  • the first superimposed video image d 1 is an example of the video image represented by the image data generated by the information processing device 100 .
  • a time length of the first superimposed video image d 1 is 15.0 seconds.
  • the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed m times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c. Therefore, in the first superimposed video image d 1 , the first video image b 1 can be recognized as a seamless video image. Also, the video image presented by repeatedly displaying the first superimposed video image d 1 can be recognized as a seamless video image.
  • the first area c 1 is a partial area of the background image c.
  • the first area c 1 may be the entire area of the background image c.
  • the first area c 1 may be preset or may be set by the user.
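  • The composition described above can be sketched as follows. This is an illustrative model only, with frames as nested pixel lists, not this disclosure's implementation:

```python
def superimpose(background, clip_frames, area_top, area_left, m):
    """Tile the clip m times along the timeline; for each clip frame,
    paste it into the area of a copy of the background image.
    Returns the frame list of the superimposed video."""
    out = []
    for _ in range(m):  # the clip is displayed m times
        for frame in clip_frames:
            composed = [row[:] for row in background]  # copy the background
            for r, row in enumerate(frame):            # paste into the area
                for c, px in enumerate(row):
                    composed[area_top + r][area_left + c] = px
            out.append(composed)
    return out
```

With a clip of length T/m, the output has the full predetermined length T, and its first and last frames inherit the clip's first and last frames, which preserves the seamless-loop property.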
  • the generation unit 42 generates second image data representing a second superimposed video image d 2 as illustrated in FIG. 7 , based on the first operation on the first object a 1 and a second operation on the second object a 2 .
  • the first video image b 1 is superimposed on the first area c 1 of the background image c
  • the second video image b 2 is superimposed on a second area c 2 of the background image c.
  • the second image data is another example of the image data generated by the information processing device 100 .
  • the second superimposed video image d 2 is another example of the video image represented by the image data generated by the information processing device 100 .
  • a time length of the second superimposed video image d 2 is 15.0 seconds.
  • the second superimposed video image d 2 is a video image in which the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c. Therefore, in the second superimposed video image d 2 , the first video image b 1 can be recognized as a seamless video image. Also, in the second superimposed video image d 2 , the second video image b 2 can be recognized as a seamless video image. Moreover, the video image presented by repeatedly displaying the second superimposed video image d 2 can be recognized as a seamless video image.
  • the second area c 2 is a partial area of the background image c.
  • the second area c 2 may be the entire area of the background image c.
  • the second area c 2 may be preset or may be set by the user. At least a part of the second area c 2 may overlap at least a part of the first area c 1 .
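  • Because the two clips repeat m times and n times over the same duration, the frame shown by each layer at a given instant can be derived by modular indexing. An illustrative sketch (names and frame model are assumptions):

```python
def layered_frame_indices(total_frames, len_b1, len_b2):
    """For each frame i of the superimposed video, return the pair of
    frame indices shown by the first and second clip layers."""
    return [(i % len_b1, i % len_b2) for i in range(total_frames)]
```

With a 6-frame timeline, a 3-frame first clip, and a 2-frame second clip, the first clip plays twice and the second three times, and both end on their last frames together.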
  • the operation control unit 43 controls various operations. For example, the operation control unit 43 transmits the first image data from the communication device 2 to the projector 200 . The operation control unit 43 also transmits the second image data from the communication device 2 to the projector 200 .
  • FIG. 8 is a flowchart for explaining operations of the information processing device 100 .
  • a specific icon corresponding to the program stored in the storage device 3 is displayed on the display surface 1 a.
  • When the user touches the specific icon displayed on the display surface 1 a with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
  • the processing device 4 reads the program corresponding to the specific icon from the storage device 3 . Subsequently, the processing device 4 executes the program read from the storage device 3 and thus implements the display control unit 41 , the generation unit 42 , and the operation control unit 43 .
  • In step S 101 , the display control unit 41 provides initial operation image data representing the operation screen e shown in FIG. 1 to the touch panel 1 and thus causes the operation screen e to be displayed on the display surface 1 a.
  • the operation screen e shown in FIG. 1 includes a video image area e 1 , the first object a 1 , the second object a 2 , a complete button e 2 , and a send button e 3 .
  • the video image area e 1 is used to generate a video image.
  • In the video image area e 1 , the background image c is displayed.
  • the complete button e 2 is a button for giving an instruction to complete the generation of a video image using the video image area e 1 .
  • the send button e 3 is a button for giving an instruction to transmit image data representing a video image generated in the video image area e 1 .
  • the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
  • the touch on the first object a 1 with a finger is an example of the first operation on the first object a 1 .
  • When the touch position represented by the touch position information is the position of the first object a 1 , the generation unit 42 in step S 102 determines that a touch operation on the first object a 1 is performed.
  • When it is determined that a touch operation on the first object a 1 is performed, the generation unit 42 in step S 103 superimposes the first video image b 1 on the first area c 1 as illustrated in FIG. 6 . Therefore, the first video image b 1 is displayed over the background image c.
  • the generation unit 42 first generates first operation image data representing a video image in which the first video image b 1 is superimposed on the first area c 1 , on the operation screen e. Subsequently, the generation unit 42 outputs the first operation image data to the touch panel 1 and thus causes the video image represented by the first operation image data to be displayed on the display surface 1 a.
  • the position of the first area c 1 in the background image c is not limited to the position shown in FIG. 6 .
  • the position of the first area c 1 in the background image c may be set in such a way that the centroid position of the first area c 1 coincides with the centroid position of the background image c.
  • the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4 .
  • When the start position of the trajectory represented by the touch position information is the position where the first video image b 1 is present, the display control unit 41 in step S 104 determines that an operation to move the first video image b 1 is performed.
  • the display control unit 41 in step S 105 moves the position of the first video image b 1 , that is, the position of the first area c 1 where the first video image b 1 is displayed, according to the trajectory represented by the touch position information.
  • the display control unit 41 may change the size of the first video image b 1 , that is, the size of the first area c 1 , according to an operation on the first video image b 1 .
  • the display control unit 41 may also change the direction of the first video image b 1 , that is, the direction of the first area c 1 , according to an operation on the first video image b 1 .
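  • Steps S 104 and S 105 amount to translating the area by the drag delta of the trajectory. A minimal sketch with an assumed area representation (a dict holding the top-left position of the area):

```python
def move_area(area, trajectory):
    """area: dict with 'x' and 'y' (top-left of the area);
    trajectory: ordered list of (x, y) touch points from the touch panel.
    Returns a new area translated by the overall drag delta."""
    (x0, y0), (x1, y1) = trajectory[0], trajectory[-1]
    moved = dict(area)
    moved["x"] += x1 - x0
    moved["y"] += y1 - y0
    return moved
```

Resizing and rotating the area, mentioned above as options, would follow the same pattern with scale and angle deltas derived from the gesture.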
  • the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
  • When the touch position represented by the touch position information is the position of the second object a 2 , the generation unit 42 in step S 106 determines that a touch operation on the second object a 2 is performed.
  • When it is determined that a touch operation on the second object a 2 is performed, the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 . Therefore, the second video image b 2 is displayed over the background image c.
  • the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 in the background image c where the first video image b 1 is already located, as illustrated in FIG. 7 .
  • the generation unit 42 first generates second operation image data representing a video image in which the first video image b 1 is superimposed on the first area c 1 and the second video image b 2 is superimposed on the second area c 2 , on the operation screen e.
  • the generation unit 42 outputs the second operation image data to the touch panel 1 and thus causes the video image in which the first video image b 1 is superimposed on the first area c 1 and the second video image b 2 is superimposed on the second area c 2 , to be displayed on the display surface 1 a.
  • the generation unit 42 may or may not make the start timing of the first video image b 1 located in the first area c 1 and the start timing of the second video image b 2 located in the second area c 2 coincide with each other.
  • In the present embodiment, the start timing of the first video image b 1 located in the first area c 1 coincides with the start timing of the second video image b 2 located in the second area c 2 . Therefore, the quality of the video image displayed in the video image area e 1 is improved.
  • the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 without superimposing the first video image b 1 on the first area c 1 .
  • the position of the second area c 2 in the background image c is not limited to the position shown in FIG. 7 .
  • the position of the second area c 2 in the background image c may be set in such a way that the centroid position of the second area c 2 coincides with the centroid position of the background image c.
  • the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4 .
  • When the start position of the trajectory represented by the touch position information is the position where the second video image b 2 is present, the display control unit 41 in step S 108 determines that an operation to move the second video image b 2 is performed.
  • the display control unit 41 in step S 109 moves the position of the second video image b 2 , that is, the position of the second area c 2 , according to the trajectory represented by the touch position information.
  • the display control unit 41 may change the size of the second video image b 2 , that is, the size of the second area c 2 , according to an operation on the second video image b 2 .
  • the display control unit 41 may also change the direction of the second video image b 2 , that is, the direction of the second area c 2 , according to an operation on the second video image b 2 .
  • the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
  • When the touch position represented by the touch position information is the position of the complete button e 2 , the generation unit 42 in step S 110 determines that a completion operation is performed.
  • When it is determined that a completion operation is performed, the generation unit 42 in step S 111 generates image data representing the video image shown in the video image area e 1 . For example, when the first superimposed video image d 1 is shown in the video image area e 1 , the generation unit 42 generates the first image data representing the first superimposed video image d 1 . When the second superimposed video image d 2 is shown in the video image area e 1 , the generation unit 42 generates the second image data representing the second superimposed video image d 2 . The generation unit 42 stores the image data generated in step S 111 into the storage device 3 .
  • the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
  • When the touch position represented by the touch position information is the position of the send button e 3 , the operation control unit 43 in step S 112 determines that a transmission instruction is given.
  • the operation control unit 43 in step S 113 transmits image data to the projector 200 .
  • the operation control unit 43 first reads image data from the storage device 3 .
  • When the storage device 3 stores only one piece of image data, for example, only one piece of first image data or only one piece of second image data, the operation control unit 43 reads this image data from the storage device 3 .
  • When the storage device 3 stores a plurality of pieces of image data, the operation control unit 43 allows the user to select image data to be transmitted to the projector 200 from among the plurality of pieces of image data, and reads the selected image data from the storage device 3 .
  • the operation control unit 43 causes the communication device 2 to transmit the image data read from the storage device 3 , to the projector 200 .
  • On receiving the first image data, the projector 200 stores the first image data. Subsequently, the projector 200 repeatedly projects the video image represented by the first image data onto a projection target object such as a product. Meanwhile, on receiving the second image data, the projector 200 stores the second image data. Subsequently, the projector 200 repeatedly projects the video image represented by the second image data onto a projection target object.
  • the projection target object is not limited to a product.
  • the projection target object may be an object that is not a product, for example, a projection surface such as a screen or wall.
  • When the touch position represented by the touch position information is not the position of the first object a 1 in step S 102 described above, the processing proceeds to step S 104 instead of step S 103 .
  • When the start position of the trajectory represented by the touch position information is not the position where the first video image b 1 is present in step S 104 described above, the processing proceeds to step S 106 instead of step S 105 .
  • When the touch position represented by the touch position information is not the position of the second object a 2 in step S 106 described above, the processing proceeds to step S 108 instead of step S 107 .
  • When the start position of the trajectory represented by the touch position information is not the position where the second video image b 2 is present in step S 108 described above, the processing proceeds to step S 110 instead of step S 109 .
  • When the touch position represented by the touch position information is not the position of the complete button e 2 in step S 110 described above, the processing proceeds to step S 102 instead of step S 111 .
  • When the touch position represented by the touch position information is not the position of the send button e 3 in step S 112 described above, the processing proceeds to step S 102 instead of step S 113 .
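  • Taken together, the branches of the flowchart of FIG. 8 form a simple touch dispatcher. The sketch below compresses the flowchart; the hit-area model and action names are illustrative and not taken from this disclosure (the move steps S 104 /S 108 , which use trajectories rather than single touches, are omitted):

```python
def dispatch(touch, hit_areas):
    """Map a touch position to the action of the matching flowchart branch.
    hit_areas: dict mapping element names to sets of touch positions."""
    if touch in hit_areas["object_a1"]:    # S102 -> S103: superimpose b1
        return "superimpose_b1"
    if touch in hit_areas["object_a2"]:    # S106 -> S107: superimpose b2
        return "superimpose_b2"
    if touch in hit_areas["complete_e2"]:  # S110 -> S111: generate image data
        return "generate_image_data"
    if touch in hit_areas["send_e3"]:      # S112 -> S113: transmit to projector
        return "transmit"
    return "ignore"                        # no match: loop back to S102
```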
  • the method for generating image data, the program, and the information processing device 100 include the configurations described below.
  • the display control unit 41 causes the first object a 1 corresponding to the first video image b 1 to be displayed on the display surface 1 a .
  • the generation unit 42 generates the first image data, based on the first operation on the first object a 1 .
  • the first image data represents the first superimposed video image d 1 , in which the first video image b 1 is superimposed on the first area c 1 of the background image c.
  • the time length of the first video image b 1 is 15.0 seconds divided by m, m being an integer equal to or greater than 1.
  • the time length of the first superimposed video image d 1 is 15.0 seconds.
  • the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed m times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
  • the user does not need to input an instruction about every movement to be executed by an object such as a character and therefore can save time and effort.
  • the first image data representing the video image having the predetermined time length, specifically, the first image data representing the first superimposed video image d 1 having the time length of 15.0 seconds, can be easily generated, based on a simple operation, that is, the operation on the first object a 1 .
  • the time length of the first superimposed video image d 1 is not limited to 15.0 seconds.
  • the first video image b 1 is displayed m times.
  • the first image of the first superimposed video image d 1 can include the first image of the first video image b 1
  • the last image of the first superimposed video image d 1 can include the last image of the first video image b 1 .
  • the speed of the first video image b 1 can be maintained. Therefore, the first video image b 1 can be displayed in the form as intended by a creator of the first video image b 1 .
  • the video image presented by repeatedly displaying the first video image b 1 can be recognized as a seamless video image.
  • the video image presented by the repeatedly displayed first superimposed video image d 1 can be recognized as a seamless video image.
  • the display control unit 41 also causes the second object a 2 corresponding to the second video image b 2 to be displayed on the display surface 1 a .
  • the generation unit 42 generates the second image data, based on the first operation on the first object a 1 and the second operation on the second object a 2 .
  • the second image data represents the second superimposed video image d 2 , in which the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c.
  • the time length of the second video image b 2 is 15.0 seconds divided by n, n being an integer equal to or greater than 1.
  • the time length of the second superimposed video image d 2 is 15.0 seconds.
  • the second superimposed video image d 2 is a video image in which the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c.
  • the second image data representing the video image having the predetermined time length can be easily generated, based on a simple operation, that is, the operation on the first object a 1 and the operation on the second object a 2 .
  • the time length of the second superimposed video image d 2 is not limited to 15.0 seconds.
  • the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times.
  • the first image of the second superimposed video image d 2 can include the first image of the first video image b 1 and the first image of the second video image b 2
  • the last image of the second superimposed video image d 2 can include the last image of the first video image b 1 and the last image of the second video image b 2 .
  • the speed of the second video image b 2 can be maintained. Therefore, the second video image b 2 can be displayed in the form as intended by the creator of the second video image b 2 .
  • the video image presented by repeatedly displaying the second video image b 2 can be recognized as a seamless video image.
  • the video image presented by the repeatedly displayed second superimposed video image d 2 can be recognized as a seamless video image.
  • each of the time length of the first video image b 1 included in the first superimposed video image d 1 , the time length of the first video image b 1 included in the second superimposed video image d 2 , and the time length of the second video image b 2 included in the second superimposed video image d 2 is not changed.
  • each of the time length of the first video image b 1 included in the first superimposed video image d 1 , the time length of the first video image b 1 included in the second superimposed video image d 2 , and the time length of the second video image b 2 included in the second superimposed video image d 2 can be changed.
  • the generation unit 42 changes the time length of the first video image b 1 included in the first superimposed video image d 1 from the second time to a third time that is different from the second time.
  • the third time is the first time divided by p, p being an integer equal to or greater than 1.
  • p is the integer that, among the integers equal to or greater than 1, has the smallest difference from the first time divided by the second time.
  • the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed p times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
  • the generation unit 42 decides the value of p as “1”.
  • the generation unit 42 then decides the third time as 15.0 seconds, that is, the first time of 15.0 seconds divided by p of “1”.
  • the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed once in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
  • the generation unit 42 decides the value of p as “4”.
  • the generation unit 42 then decides the third time as 3.75 seconds, that is, the first time of 15.0 seconds divided by p of “4”.
  • the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed four times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
  • the generation unit 42 adjusts the speed of the first video image b 1 in order to change the time length of the first video image b 1 from 4.00 seconds to 3.75 seconds.
  • when the first time divided by the second time is equidistant from two integers, the generation unit 42 may decide the value of p as either the larger one or the smaller one of the two integers.
  • the generation unit 42 can also modify the second superimposed video image d 2 similarly to the first superimposed video image d 1 .
  • the first image of the first superimposed video image d 1 can include the first image of the first video image b 1
  • the last image of the first superimposed video image d 1 can include the last image of the first video image b 1 .
  • the first image of the second superimposed video image d 2 can include the first image of the first video image b 1
  • the last image of the second superimposed video image d 2 can include the last image of the first video image b 1 .
  • the first image of the second superimposed video image d 2 can include the first image of the second video image b 2
  • the last image of the second superimposed video image d 2 can include the last image of the second video image b 2 .
  • the video image presented by the repeatedly displayed first superimposed video image d 1 can be recognized as a seamless video image.
  • the video image presented by the repeatedly displayed second superimposed video image d 2 can be recognized as a seamless video image.
  • Each of the first video image b 1 and the second video image b 2 is an example of a predetermined video image.
  • Each of the first object a 1 and the second object a 2 is an example of an object.
  • Each of the first superimposed video image d 1 and the second superimposed video image d 2 is an example of a superimposed video image.
  • Each of the first image data and the second image data is an example of image data.
  • the positional relationship between the video image area e 1 , the complete button e 2 , the send button e 3 , the first object a 1 , and the second object a 2 is not limited to the positional relationship shown in FIG. 1 .
  • Each of the first object a 1 and the second object a 2 may be displayed on a different screen from the screen where the video image area e 1 is displayed.
  • the number of objects corresponding to a video image is not limited to two and may be one, or three or more.
  • a video image corresponding to an object may show a process in which the amount of an object increases or decreases.
  • a video image corresponding to an object may show a process in which there is no snow in an initial state, subsequently snow begins to fall and pile up, and then the piled-up snow is blown by the wind, thus returning to the state where there is no snow.
  • the object is not limited to snow.
  • the object may be water, a leaf of a tree, or a living thing.
  • the generation unit 42 may change the content of the first video image b 1 arranged in the video image area e 1 , based on an operation to change the content of the first video image b 1 arranged in the video image area e 1 .
  • the generation unit 42 changes the direction of rotation of the Christmas tree b 11 to the direction opposite to the direction indicated by the first arrow b 12 .
  • the change in the content of the first video image b 1 is not limited to the change in the direction of rotation.
  • the threshold time is, for example, 3 seconds.
  • the threshold time is not limited to 3 seconds and can be suitably changed.
  • the video image presented by repeatedly displaying the first superimposed video image d 1 can be recognized as a seamless video image.
  • the generation unit 42 may change the content of the second video image b 2 arranged in the video image area e 1 , based on an operation to change the content of the second video image b 2 arranged in the video image area e 1 .
  • the generation unit 42 changes the color of the present box b 21 .
  • the change in the content of the second video image b 2 is not limited to the change in the color of the present box b 21 .
  • the video image presented by repeatedly displaying the second superimposed video image d 2 can be recognized as a seamless video image.
  • the display control unit 41 and the generation unit 42 may be provided in a server communicating with a terminal device, instead of in a terminal device such as a smartphone.
  • the server causes the operation screen e to be displayed on the display surface of the terminal device, generates the first image data and the second image data, based on an operation on the operation screen e, and provides the first image data and the second image data to the information processing device 100 such as a smartphone.
  • the server functions as an information processing device.
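
The time-length adjustment summarized above, in which the generation unit 42 decides p as the integer equal to or greater than 1 that is closest to the first time divided by the second time and then adjusts the playback speed of the clip, can be sketched as follows. This is an illustrative Python sketch only; the function names `choose_p` and `fit_clip` are hypothetical and do not appear in the disclosure.

```python
import math

def choose_p(first_time, second_time):
    """Pick the repeat count p (>= 1) closest to first_time / second_time."""
    ratio = first_time / second_time
    # candidates: the two integers bracketing the ratio, clamped to >= 1
    lo = max(1, math.floor(ratio))
    hi = max(1, math.ceil(ratio))
    # when the ratio is exactly halfway between two integers, either may be
    # chosen; this sketch arbitrarily prefers the smaller one
    return lo if abs(ratio - lo) <= abs(ratio - hi) else hi

def fit_clip(first_time, second_time):
    """Return (p, third_time, speed_factor): the clip is played p times,
    each play lasting third_time = first_time / p, which requires scaling
    the playback speed by second_time / third_time."""
    p = choose_p(first_time, second_time)
    third_time = first_time / p
    return p, third_time, second_time / third_time

# the worked example from the text: a 4.00 s clip in a 15.0 s video
p, third_time, speed = fit_clip(15.0, 4.00)   # p = 4, third_time = 3.75 s
```

For the worked example in the text, a 4.00-second clip in a 15.0-second superimposed video yields p = 4, a third time of 3.75 seconds, and a playback-speed factor of about 1.07.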

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A method for generating image data includes: displaying a first object corresponding to a first video image on a display surface; and generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object. The first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1. The first superimposed video image has a time length that is the predetermined time. The first superimposed video image is a video image in which display of the first video image is executed m times in a state where the first video image is superimposed on the first area of the predetermined image.

Description

The present application is based on, and claims priority from JP Application Serial Number 2019-212928, filed Nov. 26, 2019, the disclosure of which is hereby incorporated by reference herein in its entirety.
BACKGROUND 1. Technical Field
The present disclosure relates to a method for generating image data, a program, and an information processing device.
2. Related Art
JP-A-2010-92402 describes an animation preparation device generating image data of a video image. On receiving an instruction from a user about a movement to be executed by a character, the animation preparation device described in JP-A-2010-92402 generates image data of a video image showing the character executing the movement.
When generating image data of a video image using the animation preparation device described in JP-A-2010-92402, the user needs to input an instruction about every movement to be executed by an object such as a character. This process takes time and effort.
SUMMARY
A method for generating image data according to an aspect of the present disclosure includes: displaying a first object corresponding to a first video image on a display surface; and generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object. The first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1. The first superimposed video image has a time length that is the predetermined time. The first superimposed video image is a video image in which display of the first video image is executed m times in a state where the first video image is superimposed on the first area of the predetermined image.
A method for generating image data according to another aspect of the present disclosure includes: displaying an object corresponding to a predetermined video image on a display surface; generating image data representing a superimposed video image in which the predetermined video image is superimposed on a first area of a predetermined image, based on an operation on the object; and when a first time that is set as a time length of the superimposed video image is different from a second time that is set as a time length of the predetermined video image, changing the time length of the predetermined video image included in the superimposed video image to a third time that is different from the second time.
An information processing device according to another aspect of the present disclosure includes: a display control unit causing a first object corresponding to a first video image to be displayed on a display surface; and a generation unit generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object. The first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1. The first superimposed video image has a time length that is the predetermined time. The first superimposed video image is a video image in which display of the first video image is executed m times in a state where the first video image is superimposed on the first area of the predetermined image.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an information processing device 100 according to a first embodiment.
FIG. 2 shows an example of the information processing device 100.
FIG. 3 explains an example of a first video image b1.
FIG. 4 explains another example of the first video image b1.
FIG. 5 explains an example of a second video image b2.
FIG. 6 explains an example of a first superimposed video image d1.
FIG. 7 explains an example of a second superimposed video image d2.
FIG. 8 is a flowchart for explaining operations of the information processing device 100.
DESCRIPTION OF EXEMPLARY EMBODIMENTS A. First Embodiment A1. Outline of Information Processing Device 100
FIG. 1 shows an information processing device 100 according to a first embodiment. In FIG. 1, a smartphone is shown as an example of the information processing device 100. The information processing device 100 is not limited to a smartphone. The information processing device 100 may be, for example, a PC (personal computer) or tablet terminal.
The information processing device 100 includes a display surface 1 a displaying various images. The display surface 1 a shown in FIG. 1 displays an operation screen e. The information processing device 100 generates image data representing a video image, based on an operation on the display surface 1 a. A time length of the video image is set to 15.0 seconds, which is an example of a predetermined time. The predetermined time is not limited to 15.0 seconds. The predetermined time may be longer than 0 seconds and shorter than 15.0 seconds. The predetermined time may be longer than 15.0 seconds.
The video image represented by the image data is repeatedly displayed, for example, by a display device such as a projector. When the first image of the video image represented by the image data coincides with the last image of the video image, a person viewing the video image is highly likely to recognize the video image that is repeatedly played, as a seamless video image. Such a video image is used, for example, for a product advertisement or for a light effect to create a certain impression of a product.
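The seamlessness condition stated here, that the first image of the video image coincides with the last image, amounts to a simple per-frame comparison. The sketch below is illustrative only; the function name `looks_seamless` is hypothetical and not part of the disclosure.

```python
def looks_seamless(frames):
    """A repeatedly played video reads as seamless when its first frame
    coincides with its last frame."""
    return bool(frames) and frames[0] == frames[-1]
```

For example, a clip whose frame sequence begins and ends on the same image satisfies the check, while one that ends on a different image does not.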
A2. Example of Information Processing Device 100
FIG. 2 shows an example of the information processing device 100. The information processing device 100 includes a touch panel 1, a communication device 2, a storage device 3, and a processing device 4.
The touch panel 1 is a device in which a display device displaying an image and an input device accepting an operation by a user are integrated together. The touch panel 1 includes the display surface 1 a. The touch panel 1 displays various images on the display surface 1 a. The touch panel 1 detects a touch position, based on the electrostatic capacitance formed between the touch panel 1 and an object in contact with the touch panel 1.
The communication device 2 communicates with various devices. The communication device 2 communicates, for example, with a projector 200 via a wireless LAN (local area network). The communication device 2 may communicate with a device such as the projector 200 via a communication form other than wireless LAN, for example, wired communication or Bluetooth. Bluetooth is a registered trademark.
The projector 200 is an example of a display device. The display device is not limited to a projector and may be a display, for example, an FPD (flat panel display). The FPD is, for example, a liquid crystal display, plasma display, or organic EL (electroluminescence) display.
The storage device 3 is a recording medium readable by the processing device 4. The storage device 3 includes, for example, a non-volatile memory and a volatile memory. The non-volatile memory is, for example, a ROM (read-only memory), EPROM (erasable programmable read-only memory), or EEPROM (electrically erasable programmable read-only memory). The volatile memory is, for example, a RAM (random-access memory).
The storage device 3 stores a program executed by the processing device 4 and various data used by the processing device 4. The program can also be referred to as an “application program”, “application software”, or “app”. The program is acquired, for example, from a server or the like, not illustrated, via the communication device 2 and is subsequently stored in the storage device 3. The program may be stored in the storage device 3 in advance.
The processing device 4 is formed of, for example, a single processor or a plurality of processors. In an example, the processing device 4 is formed of a single CPU (central processing unit) or a plurality of CPUs. A part or all of the functions of the processing device 4 may be implemented by a circuit such as a DSP (digital signal processor), ASIC (application-specific integrated circuit), PLD (programmable logic device), or FPGA (field-programmable gate array). The processing device 4 executes various kinds of processing in parallel or in sequence. The processing device 4 reads the program from the storage device 3. The processing device 4 executes the program read from the storage device 3 and thus implements a display control unit 41, a generation unit 42, and an operation control unit 43.
The display control unit 41 controls the touch panel 1 and thus controls the display on the display surface 1 a. The display control unit 41 causes a first object a1 and a second object a2 to be displayed on the display surface 1 a, as shown in FIG. 1.
The first object a1 is made to correspond to a first video image b1 as illustrated in FIG. 3. The first video image b1 can be a component of a video image represented by image data generated by the information processing device 100. The first video image b1 can also be referred to as a first component candidate.
The first video image b1 shows a movement of an object. The first video image b1 illustrated in FIG. 3 is a video image in which a Christmas tree b11 makes one rotation in the direction of a first arrow b12. The first image of the first video image b1 illustrated in FIG. 3 coincides with the last image of the first video image b1 illustrated in FIG. 3.
The first video image b1 is not limited to the video image as illustrated in FIG. 3. For example, the first video image b1 may be a video image in which a cloud b13 moves in the direction of a second arrow b14, thus disappears from the video image, subsequently reappears from the left end of the video image, then moves in the direction of the second arrow b14, and ultimately turns into the same state as the initial state, as illustrated in FIG. 4. The first image of the first video image b1 illustrated in FIG. 4 coincides with the last image of the first video image b1 illustrated in FIG. 4.
When the first image of the first video image b1 coincides with the last image of the first video image b1, the video image presented by repeatedly displaying the first video image b1 can be recognized as a seamless video image. However, the first image of the first video image b1 may not coincide with the last image of the first video image b1.
A time length of the first video image b1 is 15.0 seconds divided by m, m being an integer equal to or greater than 1, where 15.0 seconds is the time length of the video image represented by the image data generated by the information processing device 100. The time length of the first video image b1 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds. The first object a1 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the first video image b1 or a letter representing the first video image b1.
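The constraint that the time length of the first video image b1 evenly divides the predetermined time can be expressed as a small helper. This is an illustrative sketch only; `repeat_count` is a hypothetical name and the tolerance handling is a simplification.

```python
def repeat_count(predetermined_time, clip_time, tol=1e-9):
    """Return m if clip_time == predetermined_time / m for some integer
    m >= 1 (so m plays of the clip exactly fill the video); else None."""
    m = round(predetermined_time / clip_time)
    if m >= 1 and abs(clip_time * m - predetermined_time) < tol:
        return m
    return None

repeat_count(15.0, 5.0)   # 3 plays fill the 15.0 s video
repeat_count(15.0, 4.0)   # None: 4.0 s does not divide 15.0 s evenly
```

A clip of 4.0 seconds is rejected here; the modification described later handles such a clip by adjusting its time length instead.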
The second object a2 is made to correspond to a second video image b2 as illustrated in FIG. 5. Just like the first video image b1, the second video image b2 can be a component of a video image represented by image data generated by the information processing device 100. The second video image b2 can also be referred to as a second component candidate.
The second video image b2 is a different video image from the first video image b1. The second video image b2 illustrated in FIG. 5 is a video image in which a present box b21 shifts from a stationary state into a vibrating state and subsequently shifts back into the stationary state. The first image of the second video image b2 illustrated in FIG. 5 coincides with the last image of the second video image b2 illustrated in FIG. 5. The second video image b2 is not limited to the video image of the present box b21 as illustrated in FIG. 5 and can be suitably changed.
When the first image of the second video image b2 coincides with the last image of the second video image b2, the video image presented by repeatedly displaying the second video image b2 can be recognized as a seamless video image. However, the first image of the second video image b2 may not coincide with the last image of the second video image b2.
A time length of the second video image b2 is 15.0 seconds divided by n, n being an integer equal to or greater than 1, where 15.0 seconds is the time length of the video image represented by the image data generated by the information processing device 100. The time length of the second video image b2 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds. The second object a2 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the second video image b2 or a letter representing the second video image b2.
The generation unit 42 generates image data representing a video image, based on an operation on the touch panel 1. For example, the generation unit 42 generates first image data representing a first superimposed video image d1 as illustrated in FIG. 6, based on a first operation on the first object a1, for example, a touch operation on the first object a1 by the user. In the first superimposed video image d1 illustrated in FIG. 6, the first video image b1 is superimposed on a first area c1 of a background image c.
The background image c is a single-color image, for example, a black image. The single-color image is not limited to the black image. For example, the single-color image may be a white image or blue image. The background image c is not limited to the single-color image. For example, the background image c may be an image having a plurality of colors. The background image c may be a still image or video image. The background image c may be preset or may be set by the user. The background image c is an example of a predetermined image.
The first image data is an example of the image data generated by the information processing device 100. The first superimposed video image d1 is an example of the video image represented by the image data generated by the information processing device 100. A time length of the first superimposed video image d1 is 15.0 seconds.
The first superimposed video image d1 is a video image in which the display of the first video image b1 is executed m times in the state where the first video image b1 is superimposed on the first area c1 of the background image c. Therefore, in the first superimposed video image d1, the first video image b1 can be recognized as a seamless video image. Also, the video image presented by repeatedly displaying the first superimposed video image d1 can be recognized as a seamless video image.
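One way to picture the first superimposed video image d1 is as a per-frame composition in which the looping first video image b1 is pasted onto the first area c1 of the background image c. The following sketch is illustrative only (hypothetical function name, frames modeled as nested lists of pixel values); it shows the modulo-based looping, not the actual implementation.

```python
def superimposed_frame(background, clip_frames, area, t, fps=30):
    """Frame of the superimposed video at time t (seconds): the clip loops,
    so its frame index wraps modulo the clip's frame count."""
    frame = [row[:] for row in background]        # copy the background frame
    clip = clip_frames[int(t * fps) % len(clip_frames)]
    x0, y0 = area                                 # top-left of the first area
    for dy, row in enumerate(clip):
        for dx, px in enumerate(row):
            frame[y0 + dy][x0 + dx] = px          # paste the clip pixel
    return frame
```

With a clip whose length is the predetermined time divided by m, the modulo wrap makes the clip play exactly m times over the superimposed video.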
The first area c1 is a partial area of the background image c. The first area c1 may be the entire area of the background image c. The first area c1 may be preset or may be set by the user.
The generation unit 42 generates second image data representing a second superimposed video image d2 as illustrated in FIG. 7, based on the first operation on the first object a1 and a second operation on the second object a2. In the second superimposed video image d2 illustrated in FIG. 7, the first video image b1 is superimposed on the first area c1 of the background image c and the second video image b2 is superimposed on a second area c2 of the background image c.
The second image data is another example of the image data generated by the information processing device 100. The second superimposed video image d2 is another example of the video image represented by the image data generated by the information processing device 100. A time length of the second superimposed video image d2 is 15.0 seconds.
The second superimposed video image d2 is a video image in which the display of the first video image b1 is executed m times and the display of the second video image b2 is executed n times in the state where the first video image b1 is superimposed on the first area c1 of the background image c and the second video image b2 is superimposed on the second area c2 of the background image c. Therefore, in the second superimposed video image d2, the first video image b1 can be recognized as a seamless video image. Also, in the second superimposed video image d2, the second video image b2 can be recognized as a seamless video image. Moreover, the video image presented by repeatedly displaying the second superimposed video image d2 can be recognized as a seamless video image.
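Because the time lengths of the two video images are the predetermined time divided by the integers m and n, both clips return to their first images exactly when the 15.0-second superimposed video ends, which is why the repeated second superimposed video image d2 can read as seamless. The following check is an illustrative sketch with a hypothetical function name.

```python
def clip_index(t, clip_len):
    """Time position within a looping clip of length clip_len at time t."""
    return t % clip_len

# with m = 3 (a 5.0 s first clip) and n = 5 (a 3.0 s second clip), both
# clips are back at their first image when the 15.0 s video ends
clip_index(15.0, 5.0)   # 0.0
clip_index(15.0, 3.0)   # 0.0
```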
The second area c2 is a partial area of the background image c. The second area c2 may be the entire area of the background image c. The second area c2 may be preset or may be set by the user. At least a part of the second area c2 may overlap at least a part of the first area c1.
The operation control unit 43 controls various operations. For example, the operation control unit 43 transmits the first image data from the communication device 2 to the projector 200. The operation control unit 43 also transmits the second image data from the communication device 2 to the projector 200.
A3. Description of Operations
FIG. 8 is a flowchart for explaining operations of the information processing device 100. In the description below, it is assumed that a specific icon corresponding to the program stored in the storage device 3 is displayed on the display surface 1 a.
When the user touches the specific icon displayed on the display surface 1 a with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4. When the touch position represented by the touch position information is the position of the specific icon, the processing device 4 reads the program corresponding to the specific icon from the storage device 3. Subsequently, the processing device 4 executes the program read from the storage device 3 and thus implements the display control unit 41, the generation unit 42, and the operation control unit 43.
Next, in step S101, the display control unit 41 provides initial operation image data representing the operation screen e shown in FIG. 1 to the touch panel 1 and thus causes the operation screen e to be displayed on the display surface 1 a.
The operation screen e shown in FIG. 1 includes a video image area e1, the first object a1, the second object a2, a complete button e2, and a send button e3. The video image area e1 is used to generate a video image. In the video image area e1, the background image c is displayed. The complete button e2 is a button for giving an instruction to complete the generation of a video image using the video image area e1. The send button e3 is a button for giving an instruction to transmit image data representing a video image generated in the video image area e1.
Subsequently, when the user touches the first object a1 with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4. The touch on the first object a1 with a finger is an example of the first operation on the first object a1. When the touch position represented by the touch position information is the position of the first object a1, the generation unit 42 in step S102 determines that a touch operation on the first object a1 is performed.
When it is determined that a touch operation on the first object a1 is performed, the generation unit 42 in step S103 superimposes the first video image b1 on the first area c1 as illustrated in FIG. 6. Therefore, the first video image b1 is displayed over the background image c.
Specifically, the generation unit 42 first generates first operation image data representing a video image in which the first video image b1 is superimposed on the first area c1, on the operation screen e. Subsequently, the generation unit 42 outputs the first operation image data to the touch panel 1 and thus causes the video image represented by the first operation image data to be displayed on the display surface 1 a.
The position of the first area c1 in the background image c is not limited to the position shown in FIG. 6. For example, the position of the first area c1 in the background image c may be set in such a way that the centroid position of the first area c1 coincides with the centroid position of the background image c.
Next, when the user touches the first video image b1 with a finger and subsequently moves the finger in contact with the touch panel 1, the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4. When the start position of the trajectory represented by the touch position information is the position where the first video image b1 is present, the display control unit 41 in step S104 determines that an operation to move the first video image b1 is performed.
When it is determined that an operation to move the first video image b1 is performed, the display control unit 41 in step S105 moves the position of the first video image b1, that is, the position of the first area c1 where the first video image b1 is displayed, according to the trajectory represented by the touch position information.
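The movement in steps S104 and S105 can be modeled as shifting the first area c1 by the displacement between the start and end of the touch trajectory. A minimal sketch, assuming the trajectory is given as a list of (x, y) touch positions (the function and data shapes are illustrative, not the embodiment's implementation):

```python
def move_area(area_pos, trajectory):
    """Move an area by the displacement of a touch trajectory (steps S104-S105).

    area_pos: (x, y) position of the first area c1.
    trajectory: list of (x, y) touch positions; the first entry is the start.
    Returns the new (x, y) position of the area.
    """
    if len(trajectory) < 2:
        return area_pos  # a single point means no movement
    (sx, sy), (ex, ey) = trajectory[0], trajectory[-1]
    dx, dy = ex - sx, ey - sy
    return (area_pos[0] + dx, area_pos[1] + dy)

# Dragging from (100, 100) to (130, 80) shifts the area by (+30, -20).
new_pos = move_area((10, 10), [(100, 100), (115, 90), (130, 80)])
```

The same scheme applies to the second area c2 in steps S108 and S109.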
The display control unit 41 may change the size of the first video image b1, that is, the size of the first area c1, according to an operation on the first video image b1. The display control unit 41 may also change the direction of the first video image b1, that is, the direction of the first area c1, according to an operation on the first video image b1.
Next, when the user touches the second object a2 with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4. When the touch position represented by the touch position information is the position of the second object a2, the generation unit 42 in step S106 determines that a touch operation on the second object a2 is performed.
When it is determined that a touch operation on the second object a2 is performed, the generation unit 42 in step S107 superimposes the second video image b2 on the second area c2. Therefore, the second video image b2 is displayed over the background image c.
When it is determined that a touch operation on the second object a2 is performed after a touch operation on the first object a1, the generation unit 42 in step S107 superimposes the second video image b2 on the second area c2 in the background image c where the first video image b1 is already located, as illustrated in FIG. 7.
Specifically, the generation unit 42 first generates second operation image data representing a video image in which the first video image b1 is superimposed on the first area c1 and the second video image b2 is superimposed on the second area c2, on the operation screen e. Next, the generation unit 42 outputs the second operation image data to the touch panel 1 and thus causes the video image in which the first video image b1 is superimposed on the first area c1 and the second video image b2 is superimposed on the second area c2, to be displayed on the display surface 1 a.
The generation unit 42 may or may not make the start timing of the first video image b1 located in the first area c1 and the start timing of the second video image b2 located in the second area c2 coincide with each other.
When the start timing of the first video image b1 located in the first area c1 coincides with the start timing of the second video image b2 located in the second area c2, the first video image b1 can be synchronized with the second video image b2. Therefore, the quality of the video image displayed in the video image area e1 is improved.
When the start timing of the first video image b1 located in the first area c1 does not coincide with the start timing of the second video image b2 located in the second area c2, the processing of synchronizing the first video image b1 with the second video image b2 can be eliminated.
Meanwhile, when it is determined that a touch operation on the second object a2 is performed in a state where a touch operation on the first object a1 is not performed, the generation unit 42 in step S107 superimposes the second video image b2 on the second area c2 without superimposing the first video image b1 on the first area c1.
The position of the second area c2 in the background image c is not limited to the position shown in FIG. 7. For example, the position of the second area c2 in the background image c may be set in such a way that the centroid position of the second area c2 coincides with the centroid position of the background image c.
Next, when the user touches the second video image b2 with a finger and subsequently moves the finger in contact with the touch panel 1, the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4. When the start position of the trajectory represented by the touch position information is the position where the second video image b2 is present, the display control unit 41 in step S108 determines that an operation to move the second video image b2 is performed.
When it is determined that an operation to move the second video image b2 is performed, the display control unit 41 in step S109 moves the position of the second video image b2, that is, the position of the second area c2, according to the trajectory represented by the touch position information.
The display control unit 41 may change the size of the second video image b2, that is, the size of the second area c2, according to an operation on the second video image b2. The display control unit 41 may also change the direction of the second video image b2, that is, the direction of the second area c2, according to an operation on the second video image b2.
Next, when the user touches the complete button e2 with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4. When the touch position represented by the touch position information is the position of the complete button e2, the generation unit 42 in step S110 determines that a completion operation is performed.
When it is determined that a completion operation is performed, the generation unit 42 in step S111 generates image data representing the video image shown in the video image area e1. For example, when the first superimposed video image d1 is shown in the video image area e1, the generation unit 42 generates the first image data representing the first superimposed video image d1. When the second superimposed video image d2 is shown in the video image area e1, the generation unit 42 generates the second image data representing the second superimposed video image d2. The generation unit 42 stores the image data generated in step S111 into the storage device 3.
Next, when the user touches the send button e3 with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4. When the touch position represented by the touch position information is the position of the send button e3, the operation control unit 43 in step S112 determines that a transmission instruction is given.

When it is determined that a transmission instruction is given, the operation control unit 43 in step S113 transmits image data to the projector 200. In step S113, the operation control unit 43 first reads image data from the storage device 3. For example, when the storage device 3 stores only one piece of image data, for example, only one piece of first image data or only one piece of second image data, the operation control unit 43 reads this image data from the storage device 3. When the storage device 3 stores a plurality of pieces of image data, the operation control unit 43 allows the user to select image data to be transmitted to the projector 200 from among the plurality of pieces of image data, and reads the selected image data from the storage device 3. Subsequently, the operation control unit 43 causes the communication device 2 to transmit the image data read from the storage device 3, to the projector 200.
On receiving the first image data, the projector 200 stores the first image data. Subsequently, the projector 200 repeatedly projects the video image represented by the first image data onto a projection target object such as a product. Meanwhile, on receiving the second image data, the projector 200 stores the second image data. Subsequently, the projector 200 repeatedly projects the video image represented by the second image data onto a projection target object. The projection target object is not limited to a product. For example, the projection target object may be an object that is not a product, for example, a projection surface such as a screen or wall.
When the touch position represented by the touch position information is not the position of the first object a1 in step S102 described above, the processing proceeds to step S104 instead of step S103.
When the start position of the trajectory represented by the touch position information is not the position where the first video image b1 is present in step S104 described above, the processing proceeds to step S106 instead of step S105.
When the touch position represented by the touch position information is not the position of the second object a2 in step S106 described above, the processing proceeds to step S108 instead of step S107.
When the start position of the trajectory represented by the touch position information is not the position where the second video image b2 is present in step S108 described above, the processing proceeds to step S110 instead of step S109.
When the touch position represented by the touch position information is not the position of the complete button e2 in step S110 described above, the processing proceeds to step S102 instead of step S111.
When the touch position represented by the touch position information is not the position of the send button e3 in step S112 described above, the processing proceeds to step S102 instead of step S113.
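The branching described in steps S102 through S113 forms a sequential chain of checks: each touch is tested against the screen elements in order, and an unmatched test falls through to the next step. A schematic sketch of this flow, where the event representation (a dict with a `"target"` key) and the state keys are assumptions introduced for illustration:

```python
def process_touch_event(ev, state):
    """Schematic of the S102-S113 branching. Each test either handles the
    touch or falls through to the next determination; any other touch
    returns the processing to S102 for the next event."""
    target = ev.get("target")
    if target == "first_object":          # S102 -> S103
        state["first_video_shown"] = True
    elif target == "first_video":         # S104 -> S105
        state["first_area_pos"] = ev["end"]
    elif target == "second_object":       # S106 -> S107
        state["second_video_shown"] = True
    elif target == "second_video":        # S108 -> S109
        state["second_area_pos"] = ev["end"]
    elif target == "complete_button":     # S110 -> S111
        state["image_data"] = "generated"
    elif target == "send_button":         # S112 -> S113
        state["sent"] = state.get("image_data") is not None
    return state
```

This is only a sketch of the control flow; the embodiment performs the determinations via touch position information, as described above, rather than via pre-resolved targets.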
A4. Overview of First Embodiment
The method for generating image data, the program, and the information processing device 100 according to this embodiment include the configurations described below.
The display control unit 41 causes the first object a1 corresponding to the first video image b1 to be displayed on the display surface 1 a. The generation unit 42 generates the first image data, based on the first operation on the first object a1. The first image data represents the first superimposed video image d1, in which the first video image b1 is superimposed on the first area c1 of the background image c. The time length of the first video image b1 is 15.0 seconds divided by m, m being an integer equal to or greater than 1. The time length of the first superimposed video image d1 is 15.0 seconds. The first superimposed video image d1 is a video image in which the display of the first video image b1 is executed m times in the state where the first video image b1 is superimposed on the first area c1 of the background image c.
According to this configuration, the user does not need to input an instruction about every movement to be executed by an object such as a character and therefore can save time and effort. The first image data representing the video image having the predetermined time length, specifically, the first image data representing the first superimposed video image d1 having the time length of 15.0 seconds, can be easily generated, based on a simple operation, that is, the operation on the first object a1. The time length of the first superimposed video image d1 is not limited to 15.0 seconds. In the first superimposed video image d1, the first video image b1 is displayed m times. Therefore, the first image of the first superimposed video image d1 can include the first image of the first video image b1, and the last image of the first superimposed video image d1 can include the last image of the first video image b1. In the first superimposed video image d1, the speed of the first video image b1 can be maintained. Therefore, the first video image b1 can be displayed in the form as intended by a creator of the first video image b1.
When the first image of the first video image b1 coincides with the last image of the first video image b1, the video image presented by repeatedly displaying the first video image b1 can be recognized as a seamless video image.
Therefore, when the first superimposed video image d1 is repeatedly displayed using the first image data, the video image presented by the repeatedly displayed first superimposed video image d1 can be recognized as a seamless video image.
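The seamlessness condition above reduces to comparing a clip's first and last frames. A minimal sketch, with frames modeled as plain values (an illustration of the condition, not the embodiment's implementation):

```python
def loops_seamlessly(frames):
    # A repeated clip reads as seamless when its first frame equals its last,
    # so the jump from the final frame back to the first is invisible.
    return len(frames) > 0 and frames[0] == frames[-1]

# A rotation that returns to its start angle loops seamlessly.
rotation = [0, 90, 180, 270, 0]      # frame contents, schematically
assert loops_seamlessly(rotation)
assert not loops_seamlessly([0, 90, 180])
```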
The display control unit 41 also causes the second object a2 corresponding to the second video image b2 to be displayed on the display surface 1 a. The generation unit 42 generates the second image data, based on the first operation on the first object a1 and the second operation on the second object a2. The second image data represents the second superimposed video image d2, in which the first video image b1 is superimposed on the first area c1 of the background image c and the second video image b2 is superimposed on the second area c2 of the background image c. The time length of the second video image b2 is 15.0 seconds divided by n, n being an integer equal to or greater than 1. The time length of the second superimposed video image d2 is 15.0 seconds. The second superimposed video image d2 is a video image in which the display of the first video image b1 is executed m times and the display of the second video image b2 is executed n times in the state where the first video image b1 is superimposed on the first area c1 of the background image c and the second video image b2 is superimposed on the second area c2 of the background image c.
According to this configuration, the second image data representing the video image having the predetermined time length, specifically, the second image data representing the second superimposed video image d2 having the time length of 15.0 seconds, can be easily generated, based on a simple operation, that is, the operation on the first object a1 and the operation on the second object a2. The time length of the second superimposed video image d2 is not limited to 15.0 seconds.
In the second superimposed video image d2, the display of the first video image b1 is executed m times and the display of the second video image b2 is executed n times.
Therefore, the first image of the second superimposed video image d2 can include the first image of the first video image b1 and the first image of the second video image b2, and the last image of the second superimposed video image d2 can include the last image of the first video image b1 and the last image of the second video image b2.
In the second superimposed video image d2, the speed of the second video image b2 can be maintained. Therefore, the second video image b2 can be displayed in the form as intended by the creator of the second video image b2.
When the first image of the second video image b2 coincides with the last image of the second video image b2, the video image presented by repeatedly displaying the second video image b2 can be recognized as a seamless video image.
Therefore, when the second superimposed video image d2 is repeatedly displayed using the second image data, the video image presented by the repeatedly displayed second superimposed video image d2 can be recognized as a seamless video image.
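With the predetermined time fixed at 15.0 seconds, the loop counts m and n follow directly from the clip lengths. A sketch with illustrative values (5.0-second and 3.0-second clips are assumptions for the example; the embodiment only requires the lengths to divide the predetermined time):

```python
SUPERIMPOSED_LENGTH = 15.0  # time length of the superimposed video image, in seconds

def loop_count(clip_length):
    """Number of times a clip of `clip_length` seconds is displayed within the
    superimposed video image. Valid only when the superimposed length is an
    integral multiple of the clip length (the first-embodiment case)."""
    count = SUPERIMPOSED_LENGTH / clip_length
    if abs(count - round(count)) > 1e-9:
        raise ValueError("superimposed length is not an integral multiple")
    return round(count)

m = loop_count(5.0)   # first video image b1: displayed 3 times
n = loop_count(3.0)   # second video image b2: displayed 5 times
```

Both clips then start and end together with the 15.0-second superimposed video image, which is what lets the repeated whole read as seamless.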
B. Modification Examples
Modified configurations of the foregoing embodiment will now be described. Two or more configurations arbitrarily selected from the examples described below may be suitably combined together without contradicting each other.
B1. First Modification Example
In the first embodiment, each of the time length of the first video image b1 included in the first superimposed video image d1, the time length of the first video image b1 included in the second superimposed video image d2, and the time length of the second video image b2 included in the second superimposed video image d2 is not changed.
In a first modification example, each of the time length of the first video image b1 included in the first superimposed video image d1, the time length of the first video image b1 included in the second superimposed video image d2, and the time length of the second video image b2 included in the second superimposed video image d2 can be changed.
Specifically, when a value obtained by dividing a first time that is set as the time length of the first superimposed video image d1 by a second time that is set as the time length of the first video image b1 is not an integer, the generation unit 42 changes the time length of the first video image b1 included in the first superimposed video image d1 from the second time to a third time that is different from the second time.
For example, the third time is the first time divided by p, p being an integer equal to or greater than 1. p is the integer, of integers equal to or greater than 1, having the smallest difference from the first time divided by the second time. In this case, the first superimposed video image d1 is a video image in which the display of the first video image b1 is executed p times in the state where the first video image b1 is superimposed on the first area c1 of the background image c.
In an example, when the first time is 15.0 seconds and the second time is 12.0 seconds, the integer having the smallest difference from "1.25", that is, the first time of 15.0 seconds divided by the second time of 12.0 seconds, of integers equal to or greater than 1, is "1".
Therefore, the generation unit 42 decides the value of p as “1”. The generation unit 42 then decides the third time as 15.0 seconds, that is, the first time of 15.0 seconds divided by p of “1”. In this case, the first superimposed video image d1 is a video image in which the display of the first video image b1 is executed once in the state where the first video image b1 is superimposed on the first area c1 of the background image c.
The generation unit 42 adjusts the speed of the first video image b1 in order to change the time length of the first video image b1 from 12.0 seconds to 15.0 seconds.
When the first time is 15.00 seconds and the second time is 4.00 seconds, the integer having the smallest difference from “3.75”, that is, the first time of 15.00 seconds divided by the second time of 4.00 seconds, of integers equal to or greater than 1, is “4”.
Therefore, the generation unit 42 decides the value of p as “4”. The generation unit 42 then decides the third time as 3.75 seconds, that is, the first time of 15.00 seconds divided by p of “4”. In this case, the first superimposed video image d1 is a video image in which the display of the first video image b1 is executed four times in the state where the first video image b1 is superimposed on the first area c1 of the background image c.
The generation unit 42 adjusts the speed of the first video image b1 in order to change the time length of the first video image b1 from 4.00 seconds to 3.75 seconds.
When there are two integers having the smallest difference from the first time divided by the second time, of integers equal to or greater than 1, the generation unit 42 may decide the value of p as either the larger one or the smaller one of the two integers.
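The two worked examples above can be expressed as one procedure: choose p as the integer (at least 1) nearest to the ratio of the first time to the second time, set the third time to the first time divided by p, and adjust the clip's playback speed accordingly. A sketch under those assumptions (the function name and the tie-break parameter are illustrative):

```python
def fit_clip_to_loop(first_time, second_time, prefer_larger=True):
    """First modification example: choose p, an integer >= 1 nearest to
    first_time / second_time, then stretch or compress the clip so that
    p repetitions exactly fill the superimposed video image.

    Returns (p, third_time, speed_factor); speed_factor > 1 means the
    clip is played faster than originally authored, < 1 means slower.
    """
    ratio = first_time / second_time
    lo = max(1, int(ratio))   # candidate integers >= 1 bracketing the ratio
    hi = lo + 1
    if abs(ratio - lo) < abs(hi - ratio):
        p = lo
    elif abs(hi - ratio) < abs(ratio - lo):
        p = hi
    else:
        # tie: the embodiment allows either choice
        p = hi if prefer_larger else lo
    third_time = first_time / p
    speed_factor = second_time / third_time
    return p, third_time, speed_factor

# 15.0 s superimposed, 12.0 s clip -> p = 1, clip slowed to run 15.0 s
# 15.0 s superimposed,  4.0 s clip -> p = 4, clip sped up to run 3.75 s
```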
The generation unit 42 can also modify the second superimposed video image d2 similarly to the first superimposed video image d1.
According to this configuration, even when the time length of the first superimposed video image d1 is not an integral multiple of the time length of the first video image b1, the first image of the first superimposed video image d1 can include the first image of the first video image b1, and the last image of the first superimposed video image d1 can include the last image of the first video image b1.
Also, even when the time length of the second superimposed video image d2 is not an integral multiple of the time length of the first video image b1, the first image of the second superimposed video image d2 can include the first image of the first video image b1, and the last image of the second superimposed video image d2 can include the last image of the first video image b1.
Moreover, even when the time length of the second superimposed video image d2 is not an integral multiple of the time length of the second video image b2, the first image of the second superimposed video image d2 can include the first image of the second video image b2, and the last image of the second superimposed video image d2 can include the last image of the second video image b2.
Thus, in the case where the first image of the first video image b1 coincides with the last image of the first video image b1, when the first superimposed video image d1 is repeatedly displayed, the video image presented by the repeatedly displayed first superimposed video image d1 can be recognized as a seamless video image.
In the case where the first image of the first video image b1 coincides with the last image of the first video image b1 and the first image of the second video image b2 coincides with the last image of the second video image b2, when the second superimposed video image d2 is repeatedly displayed, the video image presented by the repeatedly displayed second superimposed video image d2 can be recognized as a seamless video image.
Each of the first video image b1 and the second video image b2 is an example of a predetermined video image. Each of the first object a1 and the second object a2 is an example of an object. Each of the first superimposed video image d1 and the second superimposed video image d2 is an example of a superimposed video image. Each of the first image data and the second image data is an example of image data.
B2. Second Modification Example
In the first embodiment and the first modification example, the positional relationship between the video image area e1, the complete button e2, the send button e3, the first object a1, and the second object a2 is not limited to the positional relationship shown in FIG. 1. Each of the first object a1 and the second object a2 may be displayed on a different screen from the screen where the video image area e1 is displayed.
B3. Third Modification Example
In the first embodiment and the first and second modification examples, the number of objects corresponding to a video image is not limited to two and may be one, or three or more.
B4. Fourth Modification Example
In the first embodiment and the first to third modification examples, a video image corresponding to an object may show a process in which the amount of an object increases or decreases. For example, a video image corresponding to an object may show a process in which there is no snow in an initial state, subsequently snow begins to fall and pile up, and then the piled-up snow is blown by the wind, thus returning to the state where there is no snow. The object is not limited to snow. For example, the object may be water, a leaf of a tree, or a living thing.
B5. Fifth Modification Example
In the first embodiment and the first to fourth modification examples, the generation unit 42 may change the content of the first video image b1 arranged in the video image area e1, based on an operation to change the content of the first video image b1 arranged in the video image area e1.
For example, when the user keeps touching the first video image b1 arranged in the video image area e1 for a threshold time, the generation unit 42 changes the direction of rotation of the Christmas tree b11 into the direction opposite to the direction indicated by the first arrow b12. The change in the content of the first video image b1 is not limited to the change in the direction of rotation. The threshold time is, for example, 3 seconds. The threshold time is not limited to 3 seconds and can be suitably changed.
Even when the content of the first video image b1 arranged in the video image area e1 is changed, it is desirable that the first image of the first video image b1 coincides with the last image of the first video image b1.
In this case, even when the content of the first video image b1 arranged in the video image area e1 is changed, the video presented by repeatedly displaying the first superimposed video image d1 can be recognized as a seamless video.
Also, in the first embodiment and the first to fourth modification examples, the generation unit 42 may change the content of the second video image b2 arranged in the video image area e1, based on an operation to change the content of the second video image b2 arranged in the video image area e1.
For example, when the user keeps touching the second video image b2 arranged in the video image area e1 for a threshold time, the generation unit 42 changes the color of the present box b21. The change in the content of the second video image b2 is not limited to the change in the color of the present box b21.
Even when the content of the second video image b2 arranged in the video image area e1 is changed, it is desirable that the first image of the second video image b2 coincides with the last image of the second video image b2.
In this case, in the state where the first image of the first video image b1 coincides with the last image of the first video image b1, even when the content of the second video image b2 arranged in the video image area e1 is changed, the video presented by repeatedly displaying the second superimposed video image d2 can be recognized as a seamless video.
B6. Sixth Modification Example
In the first embodiment and the first to fifth modification examples, the display control unit 41 and the generation unit 42 may be provided in a server communicating with a terminal device, instead of in a terminal device such as a smartphone. In this case, the server causes the operation screen e to be displayed on the display surface of the terminal device, generates the first image data and the second image data, based on an operation on the operation screen e, and provides the first image data and the second image data to the information processing device 100 such as a smartphone. In this case, the server functions as an information processing device.

Claims (4)

What is claimed is:
1. A method for generating image data, the method comprising:
displaying a first object corresponding to a first video image on a display surface;
displaying a second object corresponding to a second video image on the display surface; and
generating an image data representing a superimposed video image in which the first video image is superimposed on a first area of a predetermined image and the second video image is superimposed on a second area of the predetermined image, based on a first operation on the first object and a second operation on the second object, wherein
the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1,
the second video image has a time length that is the predetermined time divided by n, n being an integer equal to or greater than 1,
the superimposed video image has a time length that is the predetermined time,
the superimposed video image is a video image in which display of the first video image is executed the m times and display of the second video image is executed the n times, and
m and n are different integers.
2. The method for generating image data according to claim 1, wherein
a first image of the first video image coincides with a last image of the first video image.
3. The method for generating image data according to claim 1, wherein
a first image of the second video image coincides with a last image of the second video image.
4. An information processing device comprising:
a display control unit causing a first object corresponding to a first video image to be displayed on a display surface and a second object corresponding to a second video image to be displayed on the display surface; and
a generation unit generating an image data representing a superimposed video image in which the first video image is superimposed on a first area of a predetermined image and the second video image is superimposed on a second area of the predetermined image, based on a first operation on the first object and a second operation on the second object, wherein
the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1,
the second video image has a time length that is the predetermined time divided by n, n being an integer equal to or greater than 1,
the superimposed video image has a time length that is the predetermined time,
the superimposed video image is a video image in which display of the first video image is executed the m times and display of the second video image is executed the n times, and
m and n are different integers.
US17/104,111 2019-11-26 2020-11-25 Method for generating image data, program, and information processing device Active US11232605B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2019212928A JP7070533B2 (en) 2019-11-26 2019-11-26 Image data generation method, program and information processing equipment
JP2019-212928 2019-11-26

Publications (2)

Publication Number Publication Date
US20210158578A1 US20210158578A1 (en) 2021-05-27
US11232605B2 true US11232605B2 (en) 2022-01-25

Family

ID=75971280

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/104,111 Active US11232605B2 (en) 2019-11-26 2020-11-25 Method for generating image data, program, and information processing device

Country Status (3)

Country Link
US (1) US11232605B2 (en)
JP (1) JP7070533B2 (en)
CN (1) CN112950752B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114090550B (en) * 2022-01-19 2022-11-29 成都博恩思医学机器人有限公司 Robot database construction method and system, electronic device and storage medium

Citations (9)

Publication number Priority date Publication date Assignee Title
JPH10108123A (en) 1996-09-26 1998-04-24 Nikon Corp Image playback device
US5963204A (en) 1996-09-20 1999-10-05 Nikon Corporation Electronic camera with reproduction and display of images at the same timing
US6286873B1 (en) * 1998-08-26 2001-09-11 Rufus Butler Seder Visual display device with continuous animation
JP2004248076A (en) 2003-02-14 2004-09-02 Mitsubishi Electric Corp Content display device
JP2007068062A (en) 2005-09-02 2007-03-15 D & M Holdings Inc Promotion device and method
JP2010092402A (en) 2008-10-10 2010-04-22 Square Enix Co Ltd Simple animation creation apparatus
US10379719B2 (en) * 2017-05-16 2019-08-13 Apple Inc. Emoji recording and sending
US20190349625A1 (en) * 2018-05-08 2019-11-14 Gree, Inc. Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor
JP2019197292A (en) 2018-05-08 2019-11-14 グリー株式会社 Moving image distribution system, moving image distribution method, and moving image distribution program for distributing moving image including animation of character object generated on the basis of movement of actor

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
JP2005012407A (en) * 2003-06-18 2005-01-13 Sony Corp Image projection apparatus and image processing method
JP5221576B2 (en) * 2010-03-01 2013-06-26 日本電信電話株式会社 Moving image reproduction display device, moving image reproduction display method, moving image reproduction display program, and recording medium therefor
EP2613552A3 (en) * 2011-11-17 2016-11-09 Axell Corporation Method for moving image reproduction processing and mobile information terminal using the method
JP2013115691A (en) * 2011-11-30 2013-06-10 Jvc Kenwood Corp Imaging apparatus and control program for use in imaging apparatus
JP6201501B2 (en) * 2013-08-07 2017-09-27 辰巳電子工業株式会社 Movie editing apparatus, movie editing method and program
JP6287320B2 (en) * 2014-02-24 2018-03-07 株式会社ニコン Image processing apparatus and image processing program
JP2016100778A (en) * 2014-11-21 2016-05-30 カシオ計算機株式会社 Image processor, image processing method and program
US9888219B1 (en) * 2015-10-09 2018-02-06 Electric Picture Display Systems Adjustable optical mask plate and system for reducing brightness artifact in tiled projection displays
JP6556680B2 (en) * 2016-09-23 2019-08-07 Nippon Telegraph and Telephone Corp. Video generation device, video generation method, and program
JP2018072760A (en) * 2016-11-04 2018-05-10 キヤノン株式会社 Display unit, display system and control method of display unit
JP6558461B2 (en) * 2018-03-14 2019-08-14 カシオ計算機株式会社 Image processing apparatus, image processing method, and program

Also Published As

Publication number Publication date
JP7070533B2 (en) 2022-05-18
JP2021086249A (en) 2021-06-03
CN112950752B (en) 2023-06-13
CN112950752A (en) 2021-06-11
US20210158578A1 (en) 2021-05-27

Similar Documents

Publication Publication Date Title
US20210182948A1 (en) Product browsing method and apparatus, device and storage medium
CN106575354B (en) Virtualization of tangible interface objects
CN104115106B (en) Hybrid mobile interaction for native and web apps
US10496357B2 (en) Event latency mitigation and screen selection
US20120063740A1 (en) Method and electronic device for displaying a 3d image using 2d image
EP2796973A1 (en) Method and apparatus for generating a three-dimensional user interface
KR20180008707A (en) Icon display method and apparatus
US12524139B2 (en) Image sharing method and electronic device
EP2899611B1 (en) Electronic device, method, and program for supporting touch panel operation
US20150227291A1 (en) Information processing method and electronic device
US20190064947A1 (en) Display control device, pointer display method, and non-temporary recording medium
JP2016009023A5 (en) Information processing apparatus, control method therefor, display control apparatus, and program
CN107391152B (en) Method for realizing focal point alternate-playing animation effect on Mac
US11232605B2 (en) Method for generating image data, program, and information processing device
CN107783648A (en) interaction method and system
US20150261385A1 (en) Picture signal output apparatus, picture signal output method, program, and display system
KR20160072306A (en) Content Augmentation Method and System using a Smart Pen
CN110737380B (en) Mind map display method, device, storage medium and electronic device
JP6314564B2 (en) Image processing apparatus, image processing method, and program
US11321897B2 (en) Method for generating video data, video data generation device, and program
JP6388844B2 (en) Information processing apparatus, information processing program, information processing method, and information processing system
JP6722240B2 (en) Information processing apparatus, information processing program, information processing method, and information processing system
US20210065409A1 (en) Electronic apparatus and control method thereof
JP5944000B2 (en) Image display system, information terminal, information terminal control method and control program
US20200388244A1 (en) Method of operation of display device and display device

Legal Events

Date Code Title Description
AS Assignment

Owner name: SEIKO EPSON CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SAKAI, TOSHIYUKI;REEL/FRAME:054466/0231

Effective date: 20200924

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4