US11232605B2 - Method for generating image data, program, and information processing device
- Publication number
- US11232605B2 (application US17/104,111)
- Authority
- US
- United States
- Prior art keywords
- video image
- image
- superimposed
- area
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/80—2D [Two Dimensional] animation, e.g. using sprites
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/04—Changes in size, position or resolution of an image
- G09G2340/0464—Positioning
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2340/00—Aspects of display data processing
- G09G2340/12—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
- G09G2340/125—Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2354/00—Aspects of interface with display user
Definitions
- the present disclosure relates to a method for generating image data, a program, and an information processing device.
- JP-A-2010-92402 describes an animation preparation device generating image data of a video image. On receiving an instruction from a user about a movement to be executed by a character, the animation preparation device described in JP-A-2010-92402 generates image data of a video image showing the character executing the movement.
- a method for generating image data includes: displaying a first object corresponding to a first video image on a display surface; and generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object.
- the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1.
- the first superimposed video image has a time length that is the predetermined time.
- the first superimposed video image is a video image in which display of the first video image is executed the m times in a state where the first video image is superimposed on the first area of the predetermined image.
- a method for generating image data includes: displaying an object corresponding to a predetermined video image on a display surface; generating image data representing a superimposed video image in which the predetermined video image is superimposed on a first area of a predetermined image, based on an operation on the object; and when a first time that is set as a time length of the superimposed video image is different from a second time that is set as a time length of the predetermined video image, changing the time length of the predetermined video image included in the superimposed video image to a third time that is different from the second time.
- An information processing device includes: a display control unit causing a first object corresponding to a first video image to be displayed on a display surface; and a generation unit generating first image data representing a first superimposed video image in which the first video image is superimposed on a first area of a predetermined image, based on a first operation on the first object.
- the first video image has a time length that is a predetermined time divided by m, m being an integer equal to or greater than 1.
- the first superimposed video image has a time length that is the predetermined time.
- the first superimposed video image is a video image in which display of the first video image is executed the m times in a state where the first video image is superimposed on the first area of the predetermined image.
- FIG. 1 shows an information processing device 100 according to a first embodiment.
- FIG. 2 shows an example of the information processing device 100 .
- FIG. 3 explains an example of a first video image b 1 .
- FIG. 4 explains another example of the first video image b 1 .
- FIG. 5 explains an example of a second video image b 2 .
- FIG. 6 explains an example of a first superimposed video image d 1 .
- FIG. 7 explains an example of a second superimposed video image d 2 .
- FIG. 8 is a flowchart for explaining operations of the information processing device 100 .
- FIG. 1 shows an information processing device 100 according to a first embodiment.
- a smartphone is shown as an example of the information processing device 100 .
- the information processing device 100 is not limited to a smartphone.
- the information processing device 100 may be, for example, a PC (personal computer) or tablet terminal.
- the information processing device 100 includes a display surface 1 a displaying various images.
- the display surface 1 a shown in FIG. 1 displays an operation screen e.
- the information processing device 100 generates image data representing a video image, based on an operation on the display surface 1 a .
- a time length of the video image is set to 15.0 seconds.
- 15.0 seconds is an example of a predetermined time.
- the predetermined time is not limited to 15.0 seconds.
- the predetermined time may be longer than 0 seconds and shorter than 15.0 seconds.
- the predetermined time may be longer than 15.0 seconds.
- the video image represented by the image data is repeatedly displayed, for example, by a display device such as a projector.
- a person viewing the video image is highly likely to recognize the video image that is repeatedly played, as a seamless video image.
- Such a video image is used, for example, for a product advertisement or for a light effect to create a certain impression of a product.
- FIG. 2 shows an example of the information processing device 100 .
- the information processing device 100 includes a touch panel 1 , a communication device 2 , a storage device 3 , and a processing device 4 .
- the touch panel 1 is a device in which a display device displaying an image and an input device accepting an operation by a user are integrated together.
- the touch panel 1 includes the display surface 1 a .
- the touch panel 1 displays various images on the display surface 1 a .
- the touch panel 1 detects a touch position, using an electrostatic capacitance determined by the touch panel 1 and an object in contact with the touch panel 1 .
- the communication device 2 communicates with various devices.
- the communication device 2 communicates, for example, with a projector 200 via a wireless LAN (local area network).
- the communication device 2 may communicate with a device such as the projector 200 via a different communication form from wireless LAN.
- the different communication form from wireless LAN is, for example, wired communication or Bluetooth. Bluetooth is a registered trademark.
- the projector 200 is an example of a display device.
- the display device is not limited to a projector and may be a display, for example, an FPD (flat panel display).
- the FPD is, for example, a liquid crystal display, plasma display, or organic EL (electroluminescence) display.
- the storage device 3 is a recording medium readable by the processing device 4 .
- the storage device 3 includes, for example, a non-volatile memory and a volatile memory.
- the non-volatile memory is, for example, a ROM (read-only memory), EPROM (erasable programmable read-only memory), or EEPROM (electrically erasable programmable read-only memory).
- the volatile memory is, for example, a RAM (random-access memory).
- the storage device 3 stores a program executed by the processing device 4 and various data used by the processing device 4 .
- the program can also be referred to as an “application program”, “application software”, or “app”.
- the program is acquired, for example, from a server or the like, not illustrated, via the communication device 2 and is subsequently stored in the storage device 3 .
- the program may be stored in the storage device 3 in advance.
- the processing device 4 is formed of, for example, a single processor or a plurality of processors.
- the processing device 4 is formed of a single CPU (central processing unit) or a plurality of CPUs.
- a part or all of the functions of the processing device 4 may be implemented by a circuit such as a DSP (digital signal processor), ASIC (application-specific integrated circuit), PLD (programmable logic device), or FPGA (field-programmable gate array).
- the processing device 4 executes various kinds of processing in parallel or in sequence.
- the processing device 4 reads the program from the storage device 3 .
- the processing device 4 executes the program read from the storage device 3 and thus implements a display control unit 41 , a generation unit 42 , and an operation control unit 43 .
- the display control unit 41 controls the touch panel 1 and thus controls the display on the display surface 1 a .
- the display control unit 41 causes a first object a 1 and a second object a 2 to be displayed on the display surface 1 a , as shown in FIG. 1 .
- the first object a 1 is made to correspond to a first video image b 1 as illustrated in FIG. 3 .
- the first video image b 1 can be a component of a video image represented by image data generated by the information processing device 100 .
- the first video image b 1 can also be referred to as a first component candidate.
- the first video image b 1 shows a movement of an object.
- the first video image b 1 illustrated in FIG. 3 is a video image in which a Christmas tree b 11 makes one rotation in the direction of a first arrow b 12 .
- the first image of the first video image b 1 illustrated in FIG. 3 coincides with the last image of the first video image b 1 illustrated in FIG. 3 .
- the first video image b 1 is not limited to the video image as illustrated in FIG. 3 .
- the first video image b 1 may be a video image in which a cloud b 13 moves in the direction of a second arrow b 14 , thus disappears from the video image, subsequently reappears from the left end of the video image, then moves in the direction of the second arrow b 14 , and ultimately turns into the same state as the initial state, as illustrated in FIG. 4 .
- the first image of the first video image b 1 illustrated in FIG. 4 coincides with the last image of the first video image b 1 illustrated in FIG. 4 .
- the video image presented by repeatedly displaying the first video image b 1 can be recognized as a seamless video image.
- the first image of the first video image b 1 may not coincide with the last image of the first video image b 1 .
- a time length of the first video image b 1 is 15.0 seconds divided by m, where 15.0 seconds is the time length of the video image represented by the image data generated by the information processing device 100 and m is an integer equal to or greater than 1.
- the time length of the first video image b 1 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds.
- the first object a 1 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the first video image b 1 or a letter representing the first video image b 1 .
- the second object a 2 is made to correspond to a second video image b 2 as illustrated in FIG. 5 .
- the second video image b 2 can be a component of a video image represented by image data generated by the information processing device 100 .
- the second video image b 2 can also be referred to as a second component candidate.
- the second video image b 2 is a different video image from the first video image b 1 .
- the second video image b 2 illustrated in FIG. 5 is a video image in which a present box b 21 shifts from a stationary state into a vibrating state and subsequently shifts back into the stationary state.
- the first image of the second video image b 2 illustrated in FIG. 5 coincides with the last image of the second video image b 2 illustrated in FIG. 5 .
- the second video image b 2 is not limited to the video image of the present box b 21 as illustrated in FIG. 5 and can be suitably changed.
- the video image presented by repeatedly displaying the second video image b 2 can be recognized as a seamless video image.
- the first image of the second video image b 2 may not coincide with the last image of the second video image b 2 .
- a time length of the second video image b 2 is 15.0 seconds divided by n, where 15.0 seconds is the time length of the video image represented by the image data generated by the information processing device 100 and n is an integer equal to or greater than 1.
- the time length of the second video image b 2 is, for example, 15.0 seconds, 7.5 seconds, or 5.0 seconds.
- the second object a 2 is not limited to the configuration illustrated in FIG. 1 and may be, for example, the first image of the second video image b 2 or a letter representing the second video image b 2 .
- the generation unit 42 generates image data representing a video image, based on an operation on the touch panel 1 .
- the generation unit 42 generates first image data representing a first superimposed video image d 1 as illustrated in FIG. 6 , based on a first operation on the first object a 1 , for example, a touch operation on the first object a 1 by the user.
- In the first superimposed video image d 1 , the first video image b 1 is superimposed on a first area c 1 of a background image c.
- the background image c is a single-color image, for example, a black image.
- the single-color image is not limited to the black image.
- the single-color image may be a white image or blue image.
- the background image c is not limited to the single-color image.
- the background image c may be an image having a plurality of colors.
- the background image c may be a still image or video image.
- the background image c may be preset or may be set by the user.
- the background image c is an example of a predetermined image.
- the first image data is an example of the image data generated by the information processing device 100 .
- the first superimposed video image d 1 is an example of the video image represented by the image data generated by the information processing device 100 .
- a time length of the first superimposed video image d 1 is 15.0 seconds.
- the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed m times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c. Therefore, in the first superimposed video image d 1 , the first video image b 1 can be recognized as a seamless video image. Also, the video image presented by repeatedly displaying the first superimposed video image d 1 can be recognized as a seamless video image.
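- The timing relationship described above can be sketched as follows. This is an illustrative sketch in Python, not the patent's implementation; the names Clip, repetition_count, and clip_time_at are hypothetical. It assumes the clip length divides the predetermined 15.0-second length, so the clip is simply replayed m times while pinned to the first area c 1 .

```python
from dataclasses import dataclass

PREDETERMINED_TIME = 15.0  # seconds; time length of the superimposed video image


@dataclass
class Clip:
    duration: float   # seconds; assumed to be PREDETERMINED_TIME / m
    area: tuple       # (x, y, width, height) of the superimposition area


def repetition_count(clip: Clip, total: float = PREDETERMINED_TIME) -> int:
    """Return m such that clip.duration == total / m, or raise if no such integer exists."""
    m = round(total / clip.duration)
    if m < 1 or abs(total / m - clip.duration) > 1e-9:
        raise ValueError("clip duration does not divide the predetermined time")
    return m


def clip_time_at(clip: Clip, t: float) -> float:
    """Map a time t (0 <= t < total) within the superimposed video image to a time within
    the clip; because the clip is replayed m times, this is t modulo the clip duration."""
    return t % clip.duration


# Example: a 5.0-second first video image is replayed m = 3 times within 15.0 seconds.
b1 = Clip(duration=5.0, area=(100, 80, 320, 240))
assert repetition_count(b1) == 3
assert abs(clip_time_at(b1, 12.5) - 2.5) < 1e-9
```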
- the first area c 1 is a partial area of the background image c.
- the first area c 1 may be the entire area of the background image c.
- the first area c 1 may be preset or may be set by the user.
- the generation unit 42 generates second image data representing a second superimposed video image d 2 as illustrated in FIG. 7 , based on the first operation on the first object a 1 and a second operation on the second object a 2 .
- In the second superimposed video image d 2 , the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on a second area c 2 of the background image c.
- the second image data is another example of the image data generated by the information processing device 100 .
- the second superimposed video image d 2 is another example of the video image represented by the image data generated by the information processing device 100 .
- a time length of the second superimposed video image d 2 is 15.0 seconds.
- the second superimposed video image d 2 is a video image in which the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c. Therefore, in the second superimposed video image d 2 , the first video image b 1 can be recognized as a seamless video image. Also, in the second superimposed video image d 2 , the second video image b 2 can be recognized as a seamless video image. Moreover, the video image presented by repeatedly displaying the second superimposed video image d 2 can be recognized as a seamless video image.
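- As an illustration only (not the patent's implementation), a per-frame compositing sketch of such a two-layer superimposed video image might look like the following; compose_frame is a hypothetical helper, and Pillow-style copy/paste image methods are assumed. The background image c is drawn first, then the current frame of the first video image b 1 is pasted into the first area c 1 , then the current frame of the second video image b 2 into the second area c 2 , so the later layer covers the earlier one wherever the areas overlap.

```python
def compose_frame(t, background, layers):
    """Compose one output frame at time t.

    background: the predetermined image c (a Pillow-style image is assumed).
    layers: list of (frames, duration, position) tuples drawn in order, where frames is
            the list of clip frames, duration is the clip length in seconds, and
            position is the top-left corner of the area the clip is superimposed on.
    """
    frame = background.copy()                 # start from the background image c
    for frames, duration, position in layers:
        fps = len(frames) / duration          # frames per second of this clip
        index = int((t % duration) * fps) % len(frames)
        frame.paste(frames[index], position)  # superimpose the clip frame on its area
    return frame


# Usage sketch: b1 loops m times and b2 loops n times over the same 15.0 seconds,
# because each clip's duration divides the predetermined time.
# d2_frame = compose_frame(7.2, background_c, [(b1_frames, 5.0, (100, 80)),
#                                              (b2_frames, 7.5, (400, 300))])
```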
- the second area c 2 is a partial area of the background image c.
- the second area c 2 may be the entire area of the background image c.
- the second area c 2 may be preset or may be set by the user. At least a part of the second area c 2 may overlap at least a part of the first area c 1 .
- the operation control unit 43 controls various operations. For example, the operation control unit 43 transmits the first image data from the communication device 2 to the projector 200 . The operation control unit 43 also transmits the second image data from the communication device 2 to the projector 200 .
- FIG. 8 is a flowchart for explaining operations of the information processing device 100 .
- a specific icon corresponding to the program stored in the storage device 3 is displayed on the display surface 1 a.
- When the user touches the specific icon displayed on the display surface 1 a with a finger, the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
- the processing device 4 reads the program corresponding to the specific icon from the storage device 3 . Subsequently, the processing device 4 executes the program read from the storage device 3 and thus implements the display control unit 41 , the generation unit 42 , and the operation control unit 43 .
- In step S 101 , the display control unit 41 provides initial operation image data representing the operation screen e shown in FIG. 1 to the touch panel 1 and thus causes the operation screen e to be displayed on the display surface 1 a.
- the operation screen e shown in FIG. 1 includes a video image area e 1 , the first object a 1 , the second object a 2 , a complete button e 2 , and a send button e 3 .
- the video image area e 1 is used to generate a video image.
- In the video image area e 1 , the background image c is displayed.
- the complete button e 2 is a button for giving an instruction to complete the generation of a video image using the video image area e 1 .
- the send button e 3 is a button for giving an instruction to transmit image data representing a video image generated in the video image area e 1 .
- the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
- the touch on the first object a 1 with a finger is an example of the first operation on the first object a 1 .
- the generation unit 42 in step S 102 determines that a touch operation on the first object a 1 is performed.
- the generation unit 42 in step S 103 superimposes the first video image b 1 on the first area c 1 as illustrated in FIG. 6 . Therefore, the first video image b 1 is displayed over the background image c.
- the generation unit 42 first generates first operation image data representing a video image in which the first video image b 1 is superimposed on the first area c 1 , on the operation screen e. Subsequently, the generation unit 42 outputs the first operation image data to the touch panel 1 and thus causes the video image represented by the first operation image data to be displayed on the display surface 1 a.
- the position of the first area c 1 in the background image c is not limited to the position shown in FIG. 6 .
- the position of the first area c 1 in the background image c may be set in such a way that the centroid position of the first area c 1 coincides with the centroid position of the background image c.
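- A minimal sketch of this default placement (hypothetical helper name, axis-aligned rectangles assumed): the top-left corner of the first area is chosen so that its centroid falls on the centroid of the background image.

```python
def centered_position(bg_size, area_size):
    """Top-left corner of an area whose centroid coincides with the background's centroid."""
    bg_w, bg_h = bg_size
    area_w, area_h = area_size
    return ((bg_w - area_w) // 2, (bg_h - area_h) // 2)


# Example: a 320x240 first area c1 centered in a 1280x720 background image c.
print(centered_position((1280, 720), (320, 240)))  # (480, 240)
```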
- the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4 .
- the display control unit 41 in step S 104 determines that an operation to move the first video image b 1 is performed.
- the display control unit 41 in step S 105 moves the position of the first video image b 1 , that is, the position of the first area c 1 where the first video image b 1 is displayed, according to the trajectory represented by the touch position information.
- the display control unit 41 may change the size of the first video image b 1 , that is, the size of the first area c 1 , according to an operation on the first video image b 1 .
- the display control unit 41 may also change the direction of the first video image b 1 , that is, the direction of the first area c 1 , according to an operation on the first video image b 1 .
- the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
- When the touch position represented by the touch position information is the position of the second object a 2 , the generation unit 42 in step S 106 determines that a touch operation on the second object a 2 is performed.
- When it is determined that a touch operation on the second object a 2 is performed, the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 . Therefore, the second video image b 2 is displayed over the background image c.
- the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 in the background image c where the first video image b 1 is already located, as illustrated in FIG. 7 .
- the generation unit 42 first generates second operation image data representing a video image in which the first video image b 1 is superimposed on the first area c 1 and the second video image b 2 is superimposed on the second area c 2 , on the operation screen e.
- the generation unit 42 outputs the second operation image data to the touch panel 1 and thus causes the video image in which the first video image b 1 is superimposed on the first area c 1 and the second video image b 2 is superimposed on the second area c 2 , to be displayed on the display surface 1 a.
- the generation unit 42 may or may not make the start timing of the first video image b 1 located in the first area c 1 and the start timing of the second video image b 2 located in the second area c 2 coincide with each other.
- the start timing of the first video image b 1 located in the first area c 1 coincides with the start timing of the second video image b 2 located in the second area c 2 . Therefore, the quality of the video image displayed in the video image area e 1 is improved.
- Meanwhile, when a touch operation on the first object a 1 is not performed, the generation unit 42 in step S 107 superimposes the second video image b 2 on the second area c 2 without superimposing the first video image b 1 on the first area c 1 .
- the position of the second area c 2 in the background image c is not limited to the position shown in FIG. 7 .
- the position of the second area c 2 in the background image c may be set in such a way that the centroid position of the second area c 2 coincides with the centroid position of the background image c.
- the touch panel 1 outputs touch position information representing the trajectory of the touch position of the finger to the processing device 4 .
- the display control unit 41 in step S 108 determines that an operation to move the second video image b 2 is performed.
- the display control unit 41 in step S 109 moves the position of the second video image b 2 , that is, the position of the second area c 2 , according to the trajectory represented by the touch position information.
- the display control unit 41 may change the size of the second video image b 2 , that is, the size of the second area c 2 , according to an operation on the second video image b 2 .
- the display control unit 41 may also change the direction of the second video image b 2 , that is, the direction of the second area c 2 , according to an operation on the second video image b 2 .
- the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
- When the touch position represented by the touch position information is the position of the complete button e 2 , the generation unit 42 in step S 110 determines that a completion operation is performed.
- When it is determined that a completion operation is performed, the generation unit 42 in step S 111 generates image data representing the video image shown in the video image area e 1 . For example, when the first superimposed video image d 1 is shown in the video image area e 1 , the generation unit 42 generates the first image data representing the first superimposed video image d 1 . When the second superimposed video image d 2 is shown in the video image area e 1 , the generation unit 42 generates the second image data representing the second superimposed video image d 2 . The generation unit 42 stores the image data generated in step S 111 into the storage device 3 .
- the touch panel 1 outputs touch position information representing the touch position of the finger to the processing device 4 .
- When the touch position represented by the touch position information is the position of the send button e 3 , the operation control unit 43 in step S 112 determines that a transmission instruction is given.
- the operation control unit 43 in step S 113 transmits image data to the projector 200 .
- the operation control unit 43 first reads image data from the storage device 3 .
- When the storage device 3 stores only one piece of image data, for example, only one piece of first image data or only one piece of second image data, the operation control unit 43 reads this image data from the storage device 3 .
- When the storage device 3 stores a plurality of pieces of image data, the operation control unit 43 allows the user to select image data to be transmitted to the projector 200 from among the plurality of pieces of image data, and reads the selected image data from the storage device 3 .
- the operation control unit 43 causes the communication device 2 to transmit the image data read from the storage device 3 , to the projector 200 .
- On receiving the first image data, the projector 200 stores the first image data. Subsequently, the projector 200 repeatedly projects the video image represented by the first image data onto a projection target object such as a product. Meanwhile, on receiving the second image data, the projector 200 stores the second image data. Subsequently, the projector 200 repeatedly projects the video image represented by the second image data onto a projection target object.
- the projection target object is not limited to a product.
- the projection target object may be an object that is not a product, for example, a projection surface such as a screen or wall.
- When the touch position represented by the touch position information is not the position of the first object a 1 in step S 102 described above, the processing proceeds to step S 104 instead of step S 103 .
- When the start position of the trajectory represented by the touch position information is not the position where the first video image b 1 is present in step S 104 described above, the processing proceeds to step S 106 instead of step S 105 .
- When the touch position represented by the touch position information is not the position of the second object a 2 in step S 106 described above, the processing proceeds to step S 108 instead of step S 107 .
- When the start position of the trajectory represented by the touch position information is not the position where the second video image b 2 is present in step S 108 described above, the processing proceeds to step S 110 instead of step S 109 .
- When the touch position represented by the touch position information is not the position of the complete button e 2 in step S 110 described above, the processing proceeds to step S 102 instead of step S 111 .
- When the touch position represented by the touch position information is not the position of the send button e 3 in step S 112 described above, the processing proceeds to step S 102 instead of step S 113 .
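- The branch structure of steps S 102 through S 113 can be condensed into the following sketch. This is an illustration with hypothetical handler and attribute names, not the patent's code; each touch event reported by the touch panel 1 is classified in the order of the flowchart of FIG. 8, and processing returns to step S 102 after each branch.

```python
def handle_touch(touch, state):
    """Condensed sketch of the branches of FIG. 8 (hypothetical names throughout)."""
    if touch.hits(state.first_object):            # S102 -> S103
        state.superimpose_first_video()           # place b1 on the first area c1
    elif touch.drags(state.first_video):          # S104 -> S105
        state.move_first_area(touch.trajectory)   # move c1 along the drag trajectory
    elif touch.hits(state.second_object):         # S106 -> S107
        state.superimpose_second_video()          # place b2 on the second area c2
    elif touch.drags(state.second_video):         # S108 -> S109
        state.move_second_area(touch.trajectory)
    elif touch.hits(state.complete_button):       # S110 -> S111
        state.generate_and_store_image_data()     # first or second image data
    elif touch.hits(state.send_button):           # S112 -> S113
        state.transmit_image_data_to_projector()
    # otherwise the touch is ignored and processing returns to S102
```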
- the method for generating image data, the program, and the information processing device 100 include the configurations described below.
- the display control unit 41 causes the first object a 1 corresponding to the first video image b 1 to be displayed on the display surface 1 a .
- the generation unit 42 generates the first image data, based on the first operation on the first object a 1 .
- the first image data represents the first superimposed video image d 1 , in which the first video image b 1 is superimposed on the first area c 1 of the background image c.
- the time length of the first video image b 1 is 15.0 seconds divided by m, m being an integer equal to or greater than 1.
- the time length of the first superimposed video image d 1 is 15.0 seconds.
- the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed m times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
- the user does not need to input an instruction about every movement to be executed by an object such as a character and therefore can save time and effort.
- the first image data representing the video image having the predetermined time length, specifically, the first image data representing the first superimposed video image d 1 having the time length of 15.0 seconds, can be easily generated based on a simple operation, that is, the operation on the first object a 1 .
- the time length of the first superimposed video image d 1 is not limited to 15.0 seconds.
- the first video image b 1 is displayed m times.
- the first image of the first superimposed video image d 1 can include the first image of the first video image b 1
- the last image of the first superimposed video image d 1 can include the last image of the first video image b 1 .
- the speed of the first video image b 1 can be maintained. Therefore, the first video image b 1 can be displayed in the form as intended by a creator of the first video image b 1 .
- the video image presented by repeatedly displaying the first video image b 1 can be recognized as a seamless video image.
- the video image presented by the repeatedly displayed first superimposed video image d 1 can be recognized as a seamless video image.
- the display control unit 41 also causes the second object a 2 corresponding to the second video image b 2 to be displayed on the display surface 1 a .
- the generation unit 42 generates the second image data, based on the first operation on the first object a 1 and the second operation on the second object a 2 .
- the second image data represents the second superimposed video image d 2 , in which the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c.
- the time length of the second video image b 2 is 15.0 seconds divided by n, n being an integer equal to or greater than 1.
- the time length of the second superimposed video image d 2 is 15.0 seconds.
- the second superimposed video image d 2 is a video image in which the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c and the second video image b 2 is superimposed on the second area c 2 of the background image c.
- the second image data representing the video image having the predetermined time length can be easily generated, based on a simple operation, that is, the operation on the first object a 1 and the operation on the second object a 2 .
- the time length of the second superimposed video image d 2 is not limited to 15.0 seconds.
- the display of the first video image b 1 is executed m times and the display of the second video image b 2 is executed n times.
- the first image of the second superimposed video image d 2 can include the first image of the first video image b 1 and the first image of the second video image b 2
- the last image of the second superimposed video image d 2 can include the last image of the first video image b 1 and the last image of the second video image b 2 .
- the speed of the second video image b 2 can be maintained. Therefore, the second video image b 2 can be displayed in the form as intended by the creator of the second video image b 2 .
- the video image presented by repeatedly displaying the second video image b 2 can be recognized as a seamless video image.
- the video image presented by the repeatedly displayed second superimposed video image d 2 can be recognized as a seamless video image.
- In the first embodiment, each of the time length of the first video image b 1 included in the first superimposed video image d 1 , the time length of the first video image b 1 included in the second superimposed video image d 2 , and the time length of the second video image b 2 included in the second superimposed video image d 2 is not changed.
- However, each of these time lengths can also be changed, as described below.
- When a first time that is set as the time length of the first superimposed video image d 1 is different from a second time that is set as the time length of the first video image b 1 , the generation unit 42 changes the time length of the first video image b 1 included in the first superimposed video image d 1 from the second time to a third time that is different from the second time.
- the third time is the first time divided by p, p being an integer equal to or greater than 1.
- p is the integer, among integers equal to or greater than 1, having the smallest difference from the first time divided by the second time.
- the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed p times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
- the generation unit 42 decides the value of p as “1”.
- the generation unit 42 then decides the third time as 15.0 seconds, that is, the first time of 15.0 seconds divided by p of “1”.
- the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed once in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
- the generation unit 42 decides the value of p as “4”.
- the generation unit 42 then decides the third time as 3.75 seconds, that is, the first time of 15.00 seconds divided by p of “4”.
- the first superimposed video image d 1 is a video image in which the display of the first video image b 1 is executed four times in the state where the first video image b 1 is superimposed on the first area c 1 of the background image c.
- the generation unit 42 adjusts the speed of the first video image b 1 in order to change the time length of the first video image b 1 from 4.00 seconds to 3.75 seconds.
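- A sketch of this selection and retiming rule (hypothetical function names; not the patent's implementation): p is the integer of 1 or greater nearest to the first time divided by the second time, the third time is the first time divided by p, and the clip's playback speed is scaled accordingly.

```python
def nearest_loop_count(first_time: float, second_time: float) -> int:
    """p: the integer >= 1 closest to first_time / second_time (ties may go either way)."""
    return max(1, round(first_time / second_time))


def retimed_clip(first_time: float, second_time: float):
    """Return (p, third_time, speed_factor).

    The clip's time length is changed from second_time to third_time = first_time / p by
    scaling its playback speed by second_time / third_time, so it loops p whole times."""
    p = nearest_loop_count(first_time, second_time)
    third_time = first_time / p
    return p, third_time, second_time / third_time


# The example from the text: first time 15.00 s, second time 4.00 s.
print(retimed_clip(15.00, 4.00))   # (4, 3.75, 1.0666...): play about 6.7% faster
```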
- When there are two integers having the smallest difference from the first time divided by the second time, the generation unit 42 may decide the value of p as either the larger one or the smaller one of the two integers.
- the generation unit 42 can also modify the second superimposed video image d 2 similarly to the first superimposed video image d 1 .
- the first image of the first superimposed video image d 1 can include the first image of the first video image b 1
- the last image of the first superimposed video image d 1 can include the last image of the first video image b 1 .
- the first image of the second superimposed video image d 2 can include the first image of the first video image b 1
- the last image of the second superimposed video image d 2 can include the last image of the first video image b 1 .
- the first image of the second superimposed video image d 2 can include the first image of the second video image b 2
- the last image of the second superimposed video image d 2 can include the last image of the second video image b 2 .
- the video image presented by the repeatedly displayed first superimposed video image d 1 can be recognized as a seamless video image.
- the video image presented by the repeatedly displayed second superimposed video image d 2 can be recognized as a seamless video image.
- Each of the first video image b 1 and the second video image b 2 is an example of a predetermined video image.
- Each of the first object a 1 and the second object a 2 is an example of an object.
- Each of the first superimposed video image d 1 and the second superimposed video image d 2 is an example of a superimposed video image.
- Each of the first image data and the second image data is an example of image data.
- the positional relationship between the video image area e 1 , the complete button e 2 , the send button e 3 , the first object a 1 , and the second object a 2 is not limited to the positional relationship shown in FIG. 1 .
- Each of the first object a 1 and the second object a 2 may be displayed on a different screen from the screen where the video image area e 1 is displayed.
- the number of objects corresponding to a video image is not limited to two and may be one, or three or more.
- a video image corresponding to an object may show a process in which the amount of an object increases or decreases.
- a video image corresponding to an object may show a process in which there is no snow in an initial state, subsequently snow begins to fall and pile up, and then the piled-up snow is blown by the wind, thus returning to the state where there is no snow.
- the object is not limited to snow.
- the object may be water, a leaf of a tree, or a living thing.
- the generation unit 42 may change the content of the first video image b 1 arranged in the video image area e 1 , based on an operation to change the content of the first video image b 1 arranged in the video image area e 1 .
- the generation unit 42 changes the direction of rotation of the Christmas tree b 11 into the direction opposite to the direction indicated by the first arrow b 12 .
- the change in the content of the first video image b 1 is not limited to the change in the direction of rotation.
- the threshold time is, for example, 3 seconds.
- the threshold time is not limited to 3 seconds and can be suitably changed.
- the video presented by repeatedly displaying the first superimposed video image d 1 can be recognized as a seamless video.
- the generation unit 42 may change the content of the second video image b 2 arranged in the video image area e 1 , based on an operation to change the content of the second video image b 2 arranged in the video image area e 1 .
- the generation unit 42 changes the color of the present box b 21 .
- the change in the content of the second video image b 2 is not limited to the change in the color of the present box b 21 .
- the video presented by repeatedly displaying the second superimposed video image d 2 can be recognized as a seamless video.
- the display control unit 41 and the generation unit 42 may be provided in a server communicating with a terminal device, instead of in a terminal device such as a smartphone.
- the server causes the operation screen e to be displayed on the display surface of the terminal device, generates the first image data and the second image data, based on an operation on the operation screen e, and provides the first image data and the second image data to the information processing device 100 such as a smartphone.
- the server functions as an information processing device.
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2019212928A (JP7070533B2) | 2019-11-26 | 2019-11-26 | Image data generation method, program and information processing equipment |
| JP2019-212928 | 2019-11-26 | | |
| JPJP2019-212928 | 2019-11-26 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20210158578A1 | 2021-05-27 |
| US11232605B2 | 2022-01-25 |
Family
ID=75971280
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/104,111 (US11232605B2, active) | Method for generating image data, program, and information processing device | 2019-11-26 | 2020-11-25 |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11232605B2 (en) |
| JP (1) | JP7070533B2 (en) |
| CN (1) | CN112950752B (en) |
Families Citing this family (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN114090550B (en) * | 2022-01-19 | 2022-11-29 | 成都博恩思医学机器人有限公司 | Robot database construction method and system, electronic device and storage medium |
Citations (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH10108123A (en) | 1996-09-26 | 1998-04-24 | Nikon Corp | Image playback device |
| US5963204A (en) | 1996-09-20 | 1999-10-05 | Nikon Corporation | Electronic camera with reproduction and display of images at the same timing |
| US6286873B1 (en) * | 1998-08-26 | 2001-09-11 | Rufus Butler Seder | Visual display device with continuous animation |
| JP2004248076A (en) | 2003-02-14 | 2004-09-02 | Mitsubishi Electric Corp | Content display device |
| JP2007068062A (en) | 2005-09-02 | 2007-03-15 | D & M Holdings Inc | Promotion device and method |
| JP2010092402A (en) | 2008-10-10 | 2010-04-22 | Square Enix Co Ltd | Simple animation creation apparatus |
| US10379719B2 (en) * | 2017-05-16 | 2019-08-13 | Apple Inc. | Emoji recording and sending |
| US20190349625A1 (en) * | 2018-05-08 | 2019-11-14 | Gree, Inc. | Video distribution system, video distribution method, and storage medium storing video distribution program for distributing video containing animation of character object generated based on motion of actor |
| JP2019197292A (en) | 2018-05-08 | 2019-11-14 | グリー株式会社 | Moving image distribution system, moving image distribution method, and moving image distribution program for distributing moving image including animation of character object generated on the basis of movement of actor |
Family Cites Families (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2005012407A (en) * | 2003-06-18 | 2005-01-13 | Sony Corp | Image projection apparatus and image processing method |
| JP5221576B2 (en) * | 2010-03-01 | 2013-06-26 | 日本電信電話株式会社 | Moving image reproduction display device, moving image reproduction display method, moving image reproduction display program, and recording medium therefor |
| EP2613552A3 (en) * | 2011-11-17 | 2016-11-09 | Axell Corporation | Method for moving image reproduction processing and mobile information terminal using the method |
| JP2013115691A (en) * | 2011-11-30 | 2013-06-10 | Jvc Kenwood Corp | Imaging apparatus and control program for use in imaging apparatus |
| JP6201501B2 (en) * | 2013-08-07 | 2017-09-27 | 辰巳電子工業株式会社 | Movie editing apparatus, movie editing method and program |
| JP6287320B2 (en) * | 2014-02-24 | 2018-03-07 | 株式会社ニコン | Image processing apparatus and image processing program |
| JP2016100778A (en) * | 2014-11-21 | 2016-05-30 | カシオ計算機株式会社 | Image processor, image processing method and program |
| US9888219B1 (en) * | 2015-10-09 | 2018-02-06 | Electric Picture Display Systems | Adjustable optical mask plate and system for reducing brightness artifact in tiled projection displays |
| JP6556680B2 (en) * | 2016-09-23 | 2019-08-07 | 日本電信電話株式会社 | VIDEO GENERATION DEVICE, VIDEO GENERATION METHOD, AND PROGRAM |
| JP2018072760A (en) * | 2016-11-04 | 2018-05-10 | キヤノン株式会社 | Display unit, display system and control method of display unit |
| JP6558461B2 (en) * | 2018-03-14 | 2019-08-14 | カシオ計算機株式会社 | Image processing apparatus, image processing method, and program |
- 2019-11-26: JP application JP2019212928A filed (granted as JP7070533B2, active)
- 2020-11-24: CN application CN202011327743.1A filed (granted as CN112950752B, active)
- 2020-11-25: US application US17/104,111 filed (granted as US11232605B2, active)
Also Published As
| Publication number | Publication date |
|---|---|
| JP7070533B2 (en) | 2022-05-18 |
| JP2021086249A (en) | 2021-06-03 |
| CN112950752B (en) | 2023-06-13 |
| CN112950752A (en) | 2021-06-11 |
| US20210158578A1 (en) | 2021-05-27 |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: SEIKO EPSON CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: SAKAI, TOSHIYUKI; REEL/FRAME: 054466/0231. Effective date: 20200924 |
| | FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| | STPP | Information on status: patent application and granting procedure in general | AWAITING TC RESP., ISSUE FEE NOT PAID |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT RECEIVED |
| | STPP | Information on status: patent application and granting procedure in general | PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED |
| | STCF | Information on status: patent grant | PATENTED CASE |
| | MAFP | Maintenance fee payment | PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |