WO2022054988A1 - Terminal, control method therefor, and recording medium on which a program for implementing the method is recorded - Google Patents


Info

Publication number
WO2022054988A1
Authority
WO
WIPO (PCT)
Prior art keywords
content
recording
main content
additional content
insert point
Prior art date
Application number
PCT/KR2020/012243
Other languages
English (en)
Korean (ko)
Inventor
윤철민
박선민
Original Assignee
주식회사 인에이블와우
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 주식회사 인에이블와우
Priority to PCT/KR2020/012243
Publication of WO2022054988A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera

Definitions

  • The present disclosure relates to a terminal that, in the content capture/recording stage, links additional content to main content based on an insert point set while the main content is captured/recorded, and that outputs the additional content in response to an input selecting the insert point on the main content while the main content is output, and to a method for controlling such a terminal.
  • Conventionally, linking pre-stored main content and pre-stored additional content based on an insert point was performed only after content capture/recording was already complete, and required separate programming.
  • Because a separate linking task is needed after capture/recording, and new programming is needed each time linking is performed, efficiency is reduced in terms of time and cost, and the process is cumbersome.
  • In the present disclosure, capture/recording of the main content and linking of the main content and the additional content are performed in parallel, so a separate programming step after storage can be omitted.
  • In addition, capture/recording of the main content and capture/recording of the additional content can be performed simultaneously, so a plurality of contents can be captured/recorded either simultaneously or separately.
  • The terminal of the present disclosure includes a camera; a display for outputting a live area generated by processing, in real time, data input from the camera; and a control unit that, in response to a first user input selecting a first position in the live area, captures the main content through the camera and controls first additional content, captured or recorded subsequently, to be linked to an insert point on the main content.
  • The control unit controls the first additional content to be output in response to a second user input selecting the insert point on the main content while the main content is output, and the position of the insert point on the main content may be set to be the same as the selected first position.
  • In response to a user input selecting a second position in the live area while the display outputs the live area for capturing or recording the first additional content after the main content has been captured, the control unit performs capture or recording of the first additional content through the camera and controls second additional content, captured or recorded subsequently, to be linked to an insert point on the first additional content.
  • While the first additional content is output, the control unit controls the second additional content to be output in response to a third user input selecting the insert point on the first additional content, and the position of the insert point on the first additional content may be set to be the same as the selected second position.
  • The camera includes a first camera unit for wide-angle capture or recording and a second camera unit for enlarged capture or recording. In response to the first user input, capture of the main content and capture or recording of the first additional content are performed; the capture of the main content is performed through one of the first camera unit and the second camera unit, and the capture or recording of the first additional content may be performed through the other.
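  • The dual-camera-unit assignment described above can be sketched as follows. This is an illustrative sketch only; the function and unit names (`assign_cameras`, `wide_angle_unit`, `zoom_unit`) are assumptions, not part of the disclosure.

```python
# Sketch: in response to one user input, one camera unit takes the main
# content and the other takes the additional content. Names are illustrative.

def assign_cameras(main_uses_wide: bool):
    """Return (main_camera, additional_camera); whichever unit captures the
    main content, the other captures/records the additional content."""
    wide, zoom = "wide_angle_unit", "zoom_unit"
    return (wide, zoom) if main_uses_wide else (zoom, wide)
```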
  • The first user input touches one of at least one candidate, and the at least one candidate is set based on an insert point set in a second main content different from the main content.
  • The first additional content may be an enlarged capture or recording of a focused portion of the live area.
  • In response to a user input selecting a plurality of positions in the live area, the main content is captured through the camera, and the control unit controls first additional content, captured or recorded subsequently, to be linked to an insert point on the main content.
  • While the main content is output, the control unit may control the first additional content to be output in response to a user input selecting the insert point associated with the first additional content on the main content, and the position of that insert point on the main content may be set to be the same as any one of the plurality of positions.
  • The one of the plurality of first positions may be the position whose corresponding object has the highest matching rate with the first additional content among the objects corresponding to each of the plurality of first positions.
  • The first user input may be a touch input dragging in one direction on the live area, and the position first touched by the touch input is selected as the first position.
  • Depending on the drag direction of the touch input, the first additional content may be set as a still picture type or as a moving picture type.
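  • The drag behavior described above can be sketched as follows. The mapping from direction to content type here is an assumption for illustration; the disclosure only states that the type depends on the drag direction.

```python
# Sketch: the first-touched position becomes the insert point position, and
# the drag direction selects whether the additional content is a still image
# or a video. The horizontal/vertical mapping is an illustrative assumption.

def interpret_drag(start, end):
    """start/end are (x, y) touch coordinates on the live area."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    insert_position = start                              # first-touched position
    content_type = "still" if abs(dx) >= abs(dy) else "video"
    return insert_position, content_type
```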
  • The terminal of the present disclosure includes a camera; a display for outputting a live area generated by processing, in real time, data input from the camera; and a control unit that, in response to a first user input selecting a first position in the live area, performs recording of video-type main content through the camera and controls first additional content, captured subsequently, to be linked to an insert point on the main content. The control unit controls the first additional content to be output in response to a second user input selecting the insert point on the main content while the main content is output, and the insert point on the main content may be set to be the same as the selected first position.
  • In response to a user input selecting a second position in the live area while the display outputs the live area for capturing or recording the first additional content, the control unit performs capture or recording of the first additional content through the camera and controls second additional content, captured or recorded subsequently, to be linked to an insert point on the first additional content.
  • While the first additional content is output, the control unit controls the second additional content to be output in response to a third user input selecting the insert point on the first additional content, and the position of the insert point on the first additional content may be set to be the same as the selected second position.
  • The camera includes a first camera unit for wide-angle capture or recording and a second camera unit for enlarged capture or recording. In response to the first user input, recording of the main content and capture or recording of the first additional content are performed; the recording of the main content is performed through one of the first camera unit and the second camera unit, and the capture or recording of the first additional content may be performed through the other.
  • The first user input touches one of at least one candidate, and the at least one candidate is set based on an insert point set in a second main content different from the main content.
  • The first additional content may be an enlarged capture or recording of a focused portion of the live area.
  • In response to a user input selecting a plurality of positions in the live area, recording of the main content is performed through the camera, and the control unit controls first additional content, captured or recorded subsequently, to be linked to an insert point on the main content.
  • While the main content is output, the control unit may control the first additional content to be output in response to a user input selecting the insert point associated with the first additional content on the main content, and the position of that insert point on the main content may be set to be the same as any one of the plurality of positions.
  • The one of the plurality of first positions may be the position whose corresponding object has the highest matching rate with the first additional content among the objects corresponding to each of the plurality of first positions.
  • The first user input may be a touch input dragging in one direction on the live area, and the position first touched by the touch input is selected as the first position.
  • Depending on the drag direction of the touch input, the first additional content may be set as a still picture type or as a moving picture type.
  • The terminal control method of the present disclosure, and the recording medium on which a program implementing the method is recorded, may include: a main-content capturing step of capturing main content in response to a first user input selecting a first position in a live area of a display; an insert-point setting step of setting an insert point at the same position on the captured main content as the selected first position; and a linking step of linking first additional content, captured or recorded subsequently, to the insert point. The live area is an area generated by processing, in real time, data input from a camera.
  • The method may be characterized in that the first additional content is output in response to a second user input selecting the insert point on the main content while the main content is output.
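  • The three claimed steps (capture, insert-point setting, linking) and the later output step can be sketched as follows. This is an illustrative sketch only; all class and function names are assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class InsertPoint:
    x: float                           # same position as the touched position
    y: float
    linked: "Content | None" = None    # additional content linked to this point

@dataclass
class Content:
    pixels: object
    insert_points: list = field(default_factory=list)

def capture_with_insert_point(live_frame, touch_x, touch_y):
    """Capturing step + insert-point setting step: capture the main content
    and set an insert point at the selected first position."""
    main = Content(pixels=live_frame)
    main.insert_points.append(InsertPoint(touch_x, touch_y))
    return main

def link_additional(main: Content, additional: Content):
    """Linking step: link subsequently captured additional content to the insert point."""
    main.insert_points[-1].linked = additional
    return main

def on_select_insert_point(main: Content, x, y, radius=10):
    """Output the linked additional content when the user touches the insert point."""
    for p in main.insert_points:
        if abs(p.x - x) <= radius and abs(p.y - y) <= radius and p.linked:
            return p.linked            # handed to the display for output
    return None
```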
  • Since capture/recording of the main content and capture/recording of the additional content can be performed simultaneously in the capture/recording stage, whether to capture/record a plurality of contents separately or simultaneously can be selected adaptively depending on the capture/recording environment.
  • FIG. 1 is a block diagram of a terminal according to the present disclosure.
  • FIG. 2 is a diagram illustrating an example of an image type content capturing screen.
  • FIG. 3 is a diagram illustrating an example of a video-type content recording screen.
  • FIG. 4 is a diagram illustrating a process of associating pre-stored additional content based on an insert point to pre-stored main content.
  • FIGS. 5 and 6 are diagrams illustrating an output form of main content stored in an image type.
  • FIG. 7 is a diagram illustrating an output form of additional content associated with the stored main content of the image type.
  • FIGS. 8 and 9 are diagrams illustrating an output form of main content stored in a moving picture type.
  • FIG. 10 is a diagram illustrating an output form of additional content associated with the stored main content of the moving picture type.
  • FIG. 11 is a diagram illustrating a method of simultaneously capturing an image type of main content and setting an insert point on the main content.
  • FIG. 12 is a diagram illustrating an example of a user input for capturing main content.
  • FIG. 13 is a diagram illustrating an example of setting any one of at least one candidate as an insert point.
  • FIG. 14 illustrates an embodiment in which a plurality of insert points are set by an input of simultaneously touching a plurality of points.
  • FIG. 15 illustrates an embodiment in which a plurality of insert points are set by an input of sequentially touching a plurality of points.
  • FIG. 16 illustrates an example of a user input for capturing additional content of an image type.
  • FIG. 17 illustrates an example of a user input for recording additional content of a video type.
  • FIG. 18 is a diagram illustrating a method of concurrently performing video-type main content recording and insert point setting on the main content.
  • FIG. 19 is a diagram illustrating an example of a user input for recording main content.
  • FIG. 20 is a diagram illustrating an example of setting any one of at least one candidate as an insert point.
  • FIG. 21 illustrates an embodiment in which a plurality of insert points are set by an input of simultaneously touching a plurality of points.
  • FIG. 22 illustrates an embodiment in which a plurality of insert points are set by an input of sequentially touching a plurality of points.
  • When it is said that a component is "connected", "coupled", or "linked" to another component, this may include not only a direct connection relationship but also an indirect connection relationship in which another component exists in between.
  • When a component is said to "include" or "have" another component, this means that still other components may be further included rather than excluded, unless otherwise stated.
  • Terms such as first and second are used only to distinguish one component from another and do not limit the order or importance of the components unless otherwise specified. Accordingly, within the scope of the present disclosure, a first component in one embodiment may be referred to as a second component in another embodiment, and similarly, a second component in one embodiment may be referred to as a first component in another embodiment.
  • Components that are distinguished from each other are distinguished in order to clearly explain their respective characteristics, and this does not mean that the components are necessarily separated. That is, a plurality of components may be integrated into one hardware or software unit, or one component may be distributed across a plurality of hardware or software units. Accordingly, even if not specifically mentioned, such integrated or distributed embodiments are also included in the scope of the present disclosure.
  • Components described in the various embodiments are not necessarily essential components, and some may be optional. Accordingly, an embodiment composed of a subset of the components described in one embodiment is also included in the scope of the present disclosure, and embodiments that include additional components beyond those described in the various embodiments are also included in the scope of the present disclosure.
  • the content of the present disclosure may include not only text-based content, but also multimedia content such as images, videos, and music.
  • Content described in the present disclosure may include multimedia content such as an image (e.g., a file with an extension such as jpg, gif, tif, or bmp) or a video (e.g., a file with an extension such as mpeg, avi, mp4, wmv, or mov), and the description proceeds on this basis.
  • In addition to multimedia content, the content may include text-based content such as a presentation document (e.g., a file with an extension of ppt(x)), a spreadsheet document (e.g., a file with an extension of xsl(x)), a word-processing document (e.g., a file with an extension of doc(x), hwp, txt, or pdf), a website (e.g., a file with an extension of html), or an e-book (e.g., a file with an extension such as epub).
  • Contents interconnected by an insert point may be divided into main content and additional content.
  • the additional content of the present disclosure may be content linked to an insert point included in the main content and output in response to a user input to the insert point.
  • the coordinates of the insert point associated with the main content and the additional content may be stored in association.
  • the coordinates of the insert point and link information of the additional content associated with the insert point may be included in the metadata of the main content.
  • Alternatively, the main content and the additional content may be merged into one file and stored, and the coordinates of the insert point and link information of the additional content may be included in the merged file's header.
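  • The two storage options described above can be sketched as follows. The field names and layout are illustrative assumptions only; the disclosure does not specify a concrete file format.

```python
import json

# Option 1 (sketch): insert-point coordinates and additional-content links
# carried in the main content's metadata. Keys are illustrative.
main_metadata = {
    "insert_points": [
        {"x": 120, "y": 80, "link": "additional_01.mp4"},
        {"x": 300, "y": 210, "link": "additional_02.jpg"},
    ]
}

# Option 2 (sketch): both contents merged into one file whose header carries
# the insert-point coordinates and link information.
def merge(main_bytes: bytes, additional_bytes: bytes, insert_points: list) -> bytes:
    header = json.dumps({
        "insert_points": insert_points,
        "main_size": len(main_bytes),   # lets a reader split the payload apart
    }).encode()
    return len(header).to_bytes(4, "big") + header + main_bytes + additional_bytes

def read_header(blob: bytes) -> dict:
    size = int.from_bytes(blob[:4], "big")
    return json.loads(blob[4:4 + size])
```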
  • the main content of the present disclosure may mean content including at least one insert point.
  • First additional content may itself perform the role of main content. That is, additional content may also serve as main content in relation to other additional content.
  • FIG. 1 is a block diagram of a terminal according to the present disclosure.
  • The terminal described in the present disclosure may be a mobile terminal such as a smartphone, a tablet PC (Personal Computer), a laptop, or a PDA (Personal Digital Assistant), or a fixed terminal such as a PC or a smart TV.
  • A terminal may include a communication unit 110, a camera 120, a user input unit 130, a display 140, a memory 150, and a control unit 160.
  • the communication unit 110 allows the terminal to communicate with other terminals.
  • the communication unit 110 may perform communication in a wireless manner or may perform communication in a wired manner.
  • the communication unit 110 may include at least one of a mobile communication module and a wireless Internet module.
  • The mobile communication module performs communication through a mobile communication base station using a standard such as LTE, HSDPA, or CDMA, and the wireless Internet module performs communication over a wireless LAN (Wi-Fi).
  • the wired method may include LAN, USB, HDMI, RGB or DVI.
  • The camera 120 may perform a process of photographing a subject based on input infrared or visible light, converting it into an electrical signal, and storing it in an image or video file format.
  • the camera 120 includes at least one camera unit, and each camera unit may capture or record content.
  • The camera 120 may include a camera unit for wide-angle capture/recording and a camera unit for enlarged capture/recording.
  • the user input unit 130 may receive a user input.
  • The user input unit 130 may include at least one of: a button input unit that receives a button input through buttons exposed on the exterior of the terminal; a touch input unit that receives a touch input touching the display 140; a motion input unit that receives a motion input through a motion recognition sensor; or an audio input unit that receives an audio input (e.g., a voice input) through an audio recognition sensor.
  • the touch input unit may include at least one touch sensor for receiving a touch input.
  • When the display 140 and the touch input unit form a mutually layered structure, such a structure may be referred to as a 'display'.
  • Through the touch input unit, various types of touch inputs, such as selecting or dragging an object output on the display using a pointer, may be received.
  • The motion input unit may include at least one motion recognition sensor for receiving a motion input. Through the motion recognition sensor, various motion inputs specified by at least one of a motion type, a motion position, a motion vector, a motion input time, or the number of motion inputs may be received. As another example, the controller 160 may analyze a user's motion in an image input through the camera 120 and receive a motion input based on the analysis result.
  • the motion input unit may include at least one gyro sensor or an acceleration sensor for receiving a motion input of the terminal itself.
  • Various motion inputs specified by at least one of a position of the terminal, a motion vector, a motion input time, and a motion type may be received through the gyro sensor or the acceleration sensor.
  • The motion input unit may include at least one biosensor for receiving an eye-motion input. Through the biosensor, various motion inputs specified by at least one of the user's eye position, motion type, motion vector, or motion input time may be received.
  • the display 140 outputs information processed by the terminal.
  • the display 140 serves to output an execution screen of an application driven in the terminal, a user interface on the execution screen, or a graphic user interface.
  • The execution screen may include a screen for content captured or to be captured by the camera.
  • the memory 150 stores data for application execution and data processed in the terminal.
  • The memory 150 may include at least one storage medium among a hard disk, a solid state drive (SSD), flash memory, a card-type storage device (e.g., SD or XD memory), RAM (Random Access Memory), or ROM (Read Only Memory). Web storage that can be accessed remotely through the communication unit 110 may also fall under the category of the memory 150.
  • the controller 160 controls the overall operation of the terminal.
  • the control unit 160 may process a signal, data, or information input or output through components constituting the terminal.
  • the controller 160 may execute an application stored in the memory 150 .
  • The controller 160 may include an arithmetic/control device such as a central processing unit (CPU), a graphics processing unit (GPU), a micro controller unit (MCU), or a micro processing unit (MPU).
  • The terminal need not include all the components shown in FIG. 1, and depending on the implementation, some of the components shown in FIG. 1 may be omitted.
  • the present disclosure will be described in detail based on the above description.
  • The user input of the present disclosure may be at least one of a touch input, a button input, a motion input, or an audio input. An input of the present disclosure may also be a combination of different types of inputs (touch input, button input, motion input, audio input, etc.).
  • the touch input may be an input received by a touch sensor of the touch input unit of the terminal.
  • the touch input may be specified by at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches.
  • the button input may be an input received by a button input unit of the terminal.
  • the button input may be specified by at least one of a button type, the number of buttons, a button pattern, an input strength (input pressure), an input time, or the number of times of input.
  • the motion input may be an input received by a motion sensor of a motion input unit of the terminal.
  • the motion input may be specified by at least one of a motion position, a motion recognition shape, a motion vector, a motion input time, or the number of motion inputs.
  • For the terminal itself, a motion input may be specified by at least one of the location of the terminal, the motion-recognition shape of the terminal, the motion vector of the terminal, the motion input time of the terminal, or the number of times the terminal's motion is input.
  • The eye-motion input may be specified by at least one of the position of an eye motion, an eye-motion recognition shape, an eye-motion vector, an eye-motion input time, or the number of eye-motion inputs.
  • the audio input may be an input received by an audio recognition sensor of an audio input unit of the terminal.
  • The audio input may be specified by at least one of the amplitude, waveform, or period of the audio frequency, or the audio content.
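  • One way to model the attributes that the disclosure says specify a touch input (position, pattern, intensity, time, and count) is sketched below. The field names and the toy command mapping are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TouchInput:
    position: tuple      # touch position (x, y)
    pattern: str         # touch pattern, e.g. "tap", "drag", "long_press"
    pressure: float      # touch intensity (touch pressure)
    duration_ms: int     # touch time
    count: int           # number of touches

def classify(t: TouchInput) -> str:
    """Toy dispatcher mapping a specified touch input to a command name."""
    if t.pattern == "drag":
        return "set_insert_point_and_type"
    if t.count >= 2:
        return "select_multiple_positions"
    return "capture_main_content"
```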
  • Hereinafter, the description is mainly based on the touch input, but it may be applied to other user inputs (button input, motion input, audio input, etc.) as well.
  • FIG. 2 is a diagram illustrating an example of an image type content capturing screen.
  • the content capturing screen may be a screen output to a display for capturing an image when the content type is an image.
  • the content capturing screen may include at least one of a live area, a preview area, an execution area of a shooting command, and an area related to shooting settings.
  • FIG. 2(a) is an example of a content shooting screen, showing that a preview area 201, a live area 202, a shooting-command execution area 203, and a shooting-setting area 204 are output.
  • FIG. 2(b) is an example of another content shooting screen, showing that a preview area 211, a live area 212, a shooting-command execution area 213, and a shooting-setting area 214 are output.
  • the live area may be an area for outputting an image based on data input to the camera in real time.
  • the live area may be output to an area 202 separate from the preview area 201 as shown in FIG. 2A , or may be output in a pop-up window format 212 on the preview area 211 as shown in FIG. 2B .
  • The pop-up window may be moved and resized by a touch input.
  • The preview area may be an area in which the content to be captured and the content linked based on the insert point are output. In the preview area, a mark (a dot, a number, etc.) indicating an insert point of the content may also be output together with the content, at the position of the corresponding insert point. In the examples of FIGS. 2(a) and 2(b), the content associated with the content being captured is output in the preview areas 201 and 211 together with a dot indicating the insert point.
  • In response to a touch input on the shooting-command execution area, the shooting command set for that area may be executed, and image capture may be performed through the camera.
  • A button shape for executing an image-capture command is output in the shooting-command execution areas 203 and 213.
  • In response to a touch input on the shooting-setting area, a command for changing shooting settings (e.g., a brightness, contrast, saturation, or focal-length change command) may be executed.
  • FIG. 3 is a diagram illustrating an example of a video-type content recording screen.
  • the content recording screen may be a screen output to a display for recording a video when the content type is a video.
  • the content recording screen may include at least one of a live area, a preview area, a recording-related command execution area, and a recording setting area.
  • FIG. 3(a) is an example of a content recording screen, showing that a preview area 301, a live area 302, a recording-command execution area 303, and a recording-setting area 304 are output.
  • FIG. 3(b) is an example of another content recording screen, showing that a preview area 311, a live area 312, a recording-command execution area 313, and a recording-setting area 314 are output.
  • the live area may be an area for outputting an image based on data input to the camera in real time.
  • the live area may be output to an area 302 separate from the preview area 301 as shown in FIG. 3A , or may be output in the form of a pop-up window 312 on the preview area 311 as shown in FIG. 3B .
  • The pop-up window may be moved and resized by a touch input.
  • The preview area may be an area in which the content to be captured and the content linked based on the insert point are output. In the preview area, a mark (a dot, a number, etc.) indicating an insert point of the content may also be output together with the content, at the position of the corresponding insert point. In the examples of FIGS. 3(a) and 3(b), the content associated with the content being recorded is output in the preview areas 301 and 311 together with a dot indicating the insert point.
  • In response to a touch input on the recording-command execution area, the recording-related command set for that area (a recording start command, recording pause command, recording resume command, or recording end command) may be executed, and the corresponding start, pause, resume, or end of video recording through the camera is performed.
  • A button shape for executing a video-recording start command is output in the recording-command execution areas 303 and 313.
  • In response to a touch input on the recording-setting area, a command for changing recording settings (e.g., a brightness, contrast, saturation, or focal-length change command) may be executed.
  • FIG. 4 is a diagram illustrating a process of associating pre-stored additional content based on an insert point to pre-stored main content.
  • The controller 160 may set an insert point on the pre-stored main content through editing of the pre-stored main content, and may link the pre-stored additional content to the pre-stored main content based on the set insert point.
  • The controller 160 may output the pre-stored main content to the display for editing. While the pre-stored main content is being output to the display for editing, the controller 160 executes an insert-point setting command to set the insert point in response to a user input for setting the insert point.
  • the insert point may be a link between the pre-stored main content and the pre-stored additional content.
  • in response to a user input of selecting pre-stored additional content to be linked to the pre-stored main content based on the insert point, the controller 160 may associate the additional content selected by the user input with the pre-stored main content based on the insert point.
  • the insert point may be set through the touch input 402 for setting the insert point, and by the touch input 404 for selecting the pre-stored additional content 403 to be associated at the set insert point, the main content and the selected additional content may be associated.
  • FIGS. 5 and 6 are diagrams illustrating an output form of main content stored in an image type.
  • the controller 160 may output a mark (a dot, a number, etc.) indicating an insert point on the main content together with the main content.
  • the insertion point of the main content may be displayed as a dot 501 on the main content and output together with the main content.
  • each insert point may be displayed as a dot on the main content and output together with the main content. That is, a plurality of points may be output on the main content to identify each insert point.
  • the controller 160 may output additional content associated with the insert point indicated by the corresponding dot in response to a touch input to the dot indicating the insert point on the output main content.
  • the insertion point of the main content may be displayed as numbers 502 and 503 on the main content and output together with the main content.
  • each insert point may be displayed as a number on the main content and output together with the main content. That is, a plurality of numbers may be output on the main content to identify each insert point.
  • the number indicating the insert point may be determined by a user input or may be automatically determined according to an order in which the insert point is set. As an example of the latter case, the first insert point initially set on the main content may be displayed 502 on the main content as the number '1'.
  • the second insert point set next to the first insert point may be displayed 503 on the main content as the number '2'.
  • in response to a touch input to a number indicating an insert point, the controller 160 may output additional content associated with the insert point of the corresponding number.
  • the number of insert points of the main content may be output together with the main content.
  • the number of insert points '4' of the main content is output at the lower right corner of the screen together with the main content.
  • the controller 160 may output the number of insert points on the screen in response to a touch input for the number of insert points.
  • in FIG. 6(b), in response to the touch input 601 on the part '4' indicating the number of insert points, the numbers 1, 2, 3, and 4 of the insert points of the main content are output at the bottom of the screen.
  • the controller 160 may display the insert point of each number as a dot in response to the output number of the insert point, or may output additional content associated with the insert point of each number.
  • in FIG. 6(c), in response to the touch input 602 on the number '2' of the insert point, additional content 603 associated with the insert point corresponding to the number '2' is output.
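The numbered-insert-point lookup described for FIG. 6 can be sketched as follows. This is a minimal illustration only; the dictionary layout and the file names are assumptions for the sketch, not the patent's actual implementation.

```python
# Hypothetical sketch: resolving a touch on an insert-point number to the
# additional content associated with that insert point.

def additional_content_for(insert_points, touched_number):
    """Return the additional content linked to the touched insert-point
    number, or None if no insert point carries that number."""
    return insert_points.get(touched_number)

# Main content with four insert points, numbered in the order they were set.
insert_points = {1: "zoom_photo_1.jpg", 2: "detail_clip_2.mp4",
                 3: "zoom_photo_3.jpg", 4: "detail_clip_4.mp4"}

print(len(insert_points))                         # count shown with the main content
print(additional_content_for(insert_points, 2))   # content output on touching '2'
```

Touching a number outside the set (or a point with no associated content) simply yields no output, matching the behaviour of outputting additional content only for an existing insert point.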
  • FIG. 7 illustrates an output form of additional content associated with the stored main content of the image type.
  • the form in which the additional content is output may include output to the entire screen of the display (FIG. 7(a)), output to a partial area (FIG. 7(b)), or output to a pop-up window (FIG. 7(c)).
  • when the additional content is output to the partial area, this may indicate that the additional content is output to a second area while the output of the main content is maintained in a first area of the display.
  • the first area and the second area may partially overlap.
  • when the additional content is output to the pop-up window, this may indicate that the additional content is output to a pop-up window area overlaid on the main content while the main content is maintained on the entire screen or a partial area of the display.
  • the pop-up window may be moved and resized by a touch input.
  • FIGS. 8 and 9 are diagrams illustrating an output format of main content stored in a moving picture type.
  • when outputting the main content to the display 140, the controller 160 may output a mark (a dot, a number, etc.) indicating an insert point of the main content together with the main content, during the effective time of the corresponding insert point within the running time of the main content.
  • the insert point of the main content may be displayed as a dot 801 on the main content during the effective time of the corresponding insert point during the running time of the main content, and may be output together with the main content.
  • each insert point may be displayed as a dot on the main content during the effective time of each insert point, and may be output together with the main content. That is, a plurality of points may be output on the main content to identify each insert point.
  • the controller 160 may output additional content associated with the insert point indicated by the corresponding dot in response to a touch input to the dot indicating the insert point on the output main content.
  • the insert point of the main content is displayed as numbers 802 and 803 on the main content during the effective time of the corresponding insert point during the running time of the main content, and can be output together with the main content.
  • each insert point may be displayed as a number on the main content during the effective time of each insert point, and may be output together with the main content. That is, a plurality of numbers may be output on the main content to identify each insert point.
  • the number indicating the insert point may be determined by a user input or may be automatically determined according to an order in which the insert point is set.
  • the first insert point initially set on the main content may be displayed 802 on the main content as the number '1'.
  • the second insert point set next to the first insert point may be displayed 803 on the main content as the number '2'.
  • the controller 160 may output additional content associated with the insert point of the corresponding number.
  • the number of insert points of the main content may be output together with the main content.
  • the number of insert points '4' of the main content is output at the lower right of the screen together with the main content.
  • the controller 160 may output the number of insert points on the screen in response to a touch input for the number of insert points.
  • in FIG. 9(b), in response to the touch input 901 on the part '4' indicating the number of insert points, the numbers 1, 2, 3, and 4 of the insert points of the main content are output at the bottom of the screen.
  • the controller 160 may display the insert point of each number as a dot in response to the output number of the insert point, or may output additional content associated with the insert point of each number.
  • in FIG. 9(c), in response to the touch input 902 on the number '2' of the insert point, additional content 903 associated with the insert point corresponding to the number '2' is output.
  • the number of insert points of the main content output together with the main content may change according to the running time at which the main content is output. For example, when the effective time of the first insert point is '00:05~00:10' and the effective time of the second insert point is '00:07~00:10', the number of insert points '0' is output during '00:00~00:05' of the running time of the main content, the number of insert points '1' is output during '00:05~00:07', and the number of insert points '2' may be output during '00:07~00:10'.
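The running-time behaviour above can be sketched as a simple interval check: an insert point is counted only while its effective time covers the current running time. Times are in seconds here, and the tuple layout is an illustrative assumption.

```python
# Hedged sketch of counting the insert points active at running time t of a
# moving-picture main content, following the '00:05~00:10' example above.

def active_insert_points(insert_points, t):
    """Insert points whose effective time [start, end) covers running time t."""
    return [name for name, (start, end) in insert_points.items()
            if start <= t < end]

insert_points = {
    "first":  (5, 10),   # effective time 00:05~00:10
    "second": (7, 10),   # effective time 00:07~00:10
}

print(len(active_insert_points(insert_points, 3)))   # during 00:00~00:05
print(len(active_insert_points(insert_points, 6)))   # during 00:05~00:07
print(len(active_insert_points(insert_points, 8)))   # during 00:07~00:10
```

The same check also covers the termination rule described below for FIG. 10: once the effective time of an insert point expires, its additional content is no longer among the active entries.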
  • in response to a user input for outputting additional content, when the end time of the first section arrives while the additional content is being output in the first section, the additional content may not be output until just before the start time of the second section. In this case, from the start time of the second section to the end time of the second section, the additional content may be output again without a separate user input for outputting the additional content.
  • FIG. 10 illustrates an output form of additional content associated with the stored main content of the moving picture type.
  • the form in which the additional content is output may include output to the entire screen of the display (FIG. 10(a)), output to a partial area (FIG. 10(b)), or output to a pop-up window (FIG. 10(c)).
  • when the additional content is output to the partial area, this may indicate that the additional content is output to a second area while the output of the main content is maintained in a first area of the display.
  • the first area and the second area may partially overlap.
  • when the additional content is output to the pop-up window, this may indicate that the additional content is output to a pop-up window area overlaid on the main content while the main content is maintained on the entire screen or a partial area of the display.
  • the pop-up window may be moved and resized by a touch input.
  • the additional content may be output only during the effective time of the insert point associated with the additional content within the running time of the main content. Accordingly, while the additional content is being output, when the effective time of the insert point associated with the additional content expires, the output of the additional content may be terminated.
  • in the embodiment of linking pre-stored additional content to pre-stored main content, after shooting/recording and storage of the main content and shooting/recording and storage of the additional content are separately performed, an editing step of linking the main content and the additional content is required. That is, since the shooting/recording, storage, and editing of the content are performed separately, efficiency is reduced in terms of time and cost, and the process is cumbersome.
  • the content may be at least one of main content and additional content.
  • FIG. 11 is a diagram illustrating a method of simultaneously capturing an image type of main content and setting an insert point on the main content.
  • a method of simultaneously capturing an image type of main content and setting an insert point on the main content may include at least one of: capturing the main content (S1101); setting an insert point (S1102); shooting/recording additional content (S1103); or associating the main content with the additional content (S1104).
  • in the step of photographing the main content, when there is a user input for photographing the main content, a command to photograph the main content may be executed to perform the photographing of the main content.
  • a user input for capturing the main content may be distinguished from a user input for setting an insert point, which will be described later.
  • when both the touch input for setting the insert point and the touch input for capturing the main content are touch inputs to the live area of the content capturing screen for capturing the main content, the two touch inputs may differ in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches.
  • the touch input for setting the insert point may be a long touch input applied at the same location for more than a preset time, and the touch input for the main content shooting command may be a short touch input applied at the same location for less than the preset time.
  • the touch input for setting the insert point may be the short touch input, and the touch input for capturing the main content may be the long touch input.
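The long-touch / short-touch distinction above can be sketched as a duration classifier. The threshold value and the action names are assumptions for illustration; the patent only requires that the two inputs differ in at least one touch attribute.

```python
# Illustrative classifier: a touch held at the same location for at least a
# preset time sets an insert point, while a shorter touch triggers main
# content capture. The roles can be swapped, as the text notes.

LONG_TOUCH_THRESHOLD_S = 0.5  # "preset time"; the value is an assumption

def classify_touch(duration_s, long_action="set_insert_point",
                   short_action="capture_main_content"):
    """Map a touch duration to an action based on the long-touch threshold."""
    return long_action if duration_s >= LONG_TOUCH_THRESHOLD_S else short_action

print(classify_touch(0.8))  # long touch at the same location
print(classify_touch(0.2))  # short touch at the same location
# Reversed mapping, per the alternative described in the text:
print(classify_touch(0.2, long_action="capture_main_content",
                     short_action="set_insert_point"))
```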
  • An insert point setting command, which will be described later, may be executed by a user input for capturing the main content. That is, even if there is no user input for setting a separate insert point, both the main content capturing command and the insert point setting command may be executed by one user input for capturing the main content. For example, when a touch input is applied to the live area of the content capturing screen for main content capturing, a main content capturing command and an insert point setting command may be executed. As another example, when a touch input is applied to the execution area of the shooting command while the focused portion of the camera is output on the live area, a main content shooting command and an insert point setting command may be executed.
  • in the step of setting the insert point, when there is a user input for setting the insert point, a command to set the insert point may be executed and the insert point may be set based on the user input.
  • the step of shooting/recording the additional content may include executing an additional content shooting command or at least one additional content recording related command when there is a user input for shooting/recording the additional content.
  • the step of associating the main content and the additional content may include associating the captured main content with the subsequent captured or recorded additional content based on a set insert point on the main content.
  • the control unit 160 may execute a command for capturing the main content to perform main content capturing.
  • the controller 160 may execute an insert point setting command to set the insert point based on the user input.
  • the controller 160 may execute an additional content shooting command or at least one additional content recording related command to perform additional content shooting/recording.
  • the controller 160 may link the additional content based on an insert point set in the main content.
  • FIG. 12 is a diagram illustrating an example of a user input for capturing main content.
  • a content capturing screen for capturing the main content may be output on the display.
  • the controller 160 may execute a main content capturing command to perform main content capturing.
  • the user input for capturing the main content may include a touch input to the execution area of the capturing command of the content capturing screen for capturing the main content (FIG. 12(a)) or a touch input to the live area (FIG. 12(b)).
  • the user input 1202 for setting the insert point may be required separately. Alternatively, an insert point setting command may be executed by the user input 1203 for main content shooting, so there may be no need for a separate user input for setting the insert point.
  • the controller 160 may focus the camera on a portion corresponding to the corresponding position.
  • the display may output a main content recording screen, an additional content capturing/recording screen, or a separate main content or additional content screen.
  • the separate main content or additional content screen may include a case in which the main content or additional content is output to the entire screen of the display, output to a partial area, or output to a pop-up window.
  • the setting of the insert point of the main content may be performed by specifying the position of the corresponding insert point.
  • setting of each insert point may be performed by specifying the position of each insert point.
  • the position of the insert point of the main content may be specified by one coordinate.
  • the user input for setting the insert point may be a touch input on the live area of the content capturing screen for capturing the main content.
  • the insert point may be set based on a touch input of the live area.
  • the insert point may be set to be the same as the touch input coordinates.
  • the insert point may be set at all coordinates within a range having a predetermined shape based on the coordinates input by the touch.
  • the size of the predetermined shape may be pre-set or may be adaptively determined in consideration of a display screen ratio, an output ratio of content, and the like.
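The "range having a predetermined shape" described above can be sketched as a hit region around the touched coordinate. Using a circle, the radius value, and the linear adaptation rule are all assumptions for illustration; the text only states that the size may be preset or determined from the display and content ratios.

```python
# Sketch: every coordinate within an assumed circular region around the
# touched point is treated as belonging to the insert point.
import math

def in_insert_point_region(touch_xy, query_xy, radius):
    """True if query_xy lies within the insert-point region around touch_xy."""
    dx = query_xy[0] - touch_xy[0]
    dy = query_xy[1] - touch_xy[1]
    return math.hypot(dx, dy) <= radius

def adapted_radius(base_radius, display_ratio, content_ratio):
    """Toy adaptation of the region size to display/content ratios (assumed rule)."""
    return base_radius * display_ratio / content_ratio

print(in_insert_point_region((100, 100), (103, 104), radius=5))
print(in_insert_point_region((100, 100), (120, 100), radius=5))
print(adapted_radius(10, 2.0, 1.0))
```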
  • a user input for setting an insert point may be distinguished from a user input for capturing the main content. For example, when both the user input for setting the insert point and the user input for shooting the main content are performed as a touch input to the live area of the content shooting screen for shooting the main content, the two touch inputs may differ in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches. As this has been described above, a detailed description is omitted.
  • a command for capturing the main content may be executed. That is, even if there is no separate user input for capturing the main content, both the insert point setting command and the main content capturing command may be executed by one user input for setting the insert point. For example, when there is a touch input in the live area of the content capturing screen for main content capturing, a main content capturing command is executed to perform main content capturing, and an insert point setting command is executed so that the insert point of the captured main content is set to the same coordinates as the touch input on the live area.
  • an additional content shooting command or at least one additional content recording related command may be executed. That is, even if there is no user input for additional content shooting/recording, by one user input for setting the insert point, an insert point setting command and an additional content shooting command or at least one additional content recording related command (e.g., a recording start command and a recording end command) may all be executed.
  • an additional content shooting command or at least one additional content recording related command may be executed along with the main content shooting command by a user input for setting the insert point.
  • for example, when there is a touch input in the live area of the content capturing screen for main content capturing, a main content capturing command is executed to perform main content capturing, an insert point setting command is executed so that the insert point is set to the same coordinates as the touch input, and an additional content shooting command or at least one additional content recording related command is executed, so that additional content shooting/recording for the touched coordinates may be performed.
  • the additional content shooting/recording includes capturing/recording by enlarging a portion to which a touch input is applied, and the magnification ratio may be a preset value or a value adaptively determined by the size of an object.
  • the main content shooting and the additional content shooting/recording may be sequentially performed by one camera unit.
  • photographing of main content and photographing/recording of additional content may be performed by a plurality of camera units.
  • the first camera unit among the plurality of camera units may photograph the main content
  • the second camera unit may photograph/record the additional content.
  • the first camera unit may be any one of a camera unit for wide-angle photographing/recording or a camera unit for enlarged photographing/recording
  • the second camera unit may be the other one.
  • when the user input is a touch-and-drag input, the controller 160 may set the insert point based on the first touched position, and may set the type of additional content based on the dragged direction. Specifically, if the drag direction of the touch input is a first direction, the first additional content may be set as a still picture type, and if the drag direction of the touch input is a second direction, the first additional content may be set as a moving picture type.
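The touch-and-drag behaviour above can be sketched as follows. Treating the "first direction" as dominantly horizontal and the "second direction" as dominantly vertical is an assumption for the sketch; the patent does not fix which directions are used.

```python
# Minimal sketch: the first touched position becomes the insert point, and
# the drag direction selects the additional content type.

def interpret_touch_drag(start_xy, end_xy):
    """Return (insert_point, additional_content_type) for a touch-and-drag."""
    insert_point = start_xy
    dx = end_xy[0] - start_xy[0]
    dy = end_xy[1] - start_xy[1]
    # Assumed mapping: dominant horizontal drag -> still picture,
    # dominant vertical drag -> moving picture.
    content_type = "still_picture" if abs(dx) >= abs(dy) else "moving_picture"
    return insert_point, content_type

print(interpret_touch_drag((50, 50), (120, 60)))   # mostly horizontal drag
print(interpret_touch_drag((50, 50), (55, 140)))   # mostly vertical drag
```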
  • the controller 160 may focus the camera on a portion corresponding to the corresponding position.
  • FIG. 13 is a diagram illustrating an example of setting any one of at least one candidate as an insert point.
  • the user input for setting the insert point may be a touch input 1314 of any one of at least one candidate.
  • at least one candidate may be output to the live area, and when there is a touch input in any one of the output at least one candidate, the touch input candidate may be set as an insert point.
  • At least one candidate may be set based on an insert point set in main content (hereinafter, second main content) different from the main content (hereinafter, first main content) in which an insert point is to be set.
  • coordinates corresponding to the coordinates of the insert point of the second main content may be output to the live area as a candidate for the insert point of the first main content.
  • the first insert point candidate 1311 may be output at the coordinates of the first main content corresponding to the coordinates of the first insert point 1301 of the second main content, the second insert point candidate 1312 may be output at the coordinates corresponding to the coordinates of the second insert point 1302, and the third insert point candidate 1313 may be output at the coordinates corresponding to the coordinates of the third insert point 1303.
  • the corresponding coordinates may be coordinates of the same position as the coordinates of the insertion point of the second main content.
  • the corresponding coordinates may be coordinates obtained by changing the coordinates of the insert point of the second main content according to at least one of the size or the ratio of the content.
  • the coordinates of an object in the first main content that is the same as or similar to an object located at the insert point of the second main content may be output to the live area as candidates for the insert point of the first main content.
  • the similar object may be an object similar in at least one of color, size, type, shape, or angle. For example, the same object whose viewing angle alone has changed, an object differing only in color, or an object differing only in size may be included.
  • At least one candidate may be set based on another insert point of the main content in which the insert point is set. For example, coordinates of an object that is the same as or similar to an object located at another insert point of the main content may be output to the live area as a candidate for the insert point of the main content.
  • a plurality of insert points may be set in the main content.
  • each of the plurality of insert points may be associated with additional content.
  • the additional contents associated with the plurality of insert points may be different from or the same as each other. In the latter case, one additional content may be associated with a plurality of insert points.
  • a plurality of additional content may be associated with one insert point.
  • FIG. 14 illustrates an embodiment in which a plurality of insert points are set by an input of simultaneously touching a plurality of points.
  • when the user input for setting the insert point is an input (1401, 1402) of simultaneously touching a plurality of points on the live area of the content capturing screen for capturing the main content, a plurality of insert points may be set.
  • the plurality of insert points may be set based on the plurality of points touched on the live area.
  • the first insert point may be set to be the same as the coordinates of the first point among the plurality of points
  • the second insert point may be set to be the same as the coordinates of the second point.
  • the main content shooting command and the insert point setting command are executed by the user input, so that main content shooting may be performed and a plurality of insert points may be set in the captured main content.
  • a main content capturing command and a plurality of inserting point setting commands may be executed by the user input.
  • the first additional content capturing may be performed by the input 1401 touching a first point among the plurality of points
  • the second additional content capturing may be performed by the input 1402 touching the second point.
  • the user input for setting the insert point may include an input of simultaneously touching a plurality of candidates.
  • a candidate selected by an input of touching a first point among the plurality of points may be set as the first insert point
  • a candidate selected by an input of touching a second point may be set as the second insert point.
  • the insert points may have an order.
  • the order may be set manually or automatically.
  • An example of manually setting may include arbitrarily setting an order by a user.
  • Examples of automatic setting may include setting the order by proximity in distance based on at least one of the top, bottom, left, or right of the screen.
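The automatic ordering above can be sketched by sorting the touched points by distance from a reference position. Using the top-left corner as the reference is an assumption; the text allows any of the top, bottom, left, or right.

```python
# Illustrative automatic ordering of simultaneously touched insert points
# "by proximity in distance" to an assumed reference (top-left corner).

def order_insert_points(points, reference=(0, 0)):
    """Number insert points 1..k by increasing distance from the reference."""
    ranked = sorted(points, key=lambda p: (p[0] - reference[0]) ** 2
                                          + (p[1] - reference[1]) ** 2)
    return {number: point for number, point in enumerate(ranked, start=1)}

# Three simultaneously touched points, numbered nearest-first.
print(order_insert_points([(300, 300), (10, 20), (150, 40)]))
```

For sequentially touched points (FIG. 15), the same numbering can instead follow input order, as the text notes.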
  • FIG. 15 illustrates an embodiment in which a plurality of insert points are set by an input of sequentially touching a plurality of points.
  • when the user input for setting the insert point is an input (1501, 1502) of sequentially touching a plurality of points on the live area of the content capturing screen for capturing the main content, a plurality of insert points may be set.
  • the plurality of insert points may be set based on the plurality of points touched on the live area.
  • the first insert point may be set to be the same as the coordinates of the first point among the plurality of points
  • the second insert point may be set to be the same as the coordinates of the second point.
  • setting of the first insert point and capturing of the main content may be performed at the same position as the coordinates of the first point by the touch input 1501 of the first point for setting the insert point. Thereafter, setting of the second insert point at the same position as the coordinates of the second point may be performed by the touch input 1502 of the second point for setting the insert point.
  • the main content shooting command and the insert point setting command are executed by the user input, so that main content shooting may be performed and a plurality of insert points may be set in the captured main content.
  • first additional content capturing may be performed by an input 1501 that touches a first point among a plurality of points
  • second additional content capturing may be performed by an input 1502 that touches a second point.
  • the user input for setting the insert point may include an input of continuously touching a plurality of candidates.
  • a candidate selected by an input of touching a first point among the plurality of points may be set as the first insert point
  • a candidate selected by an input of touching a second point may be set as the second insert point.
  • the insert points may have an order.
  • the order may be set manually or automatically.
  • An example of manually setting may include arbitrarily setting an order by a user.
  • Examples of automatic setting may include setting the order by proximity in distance based on at least one of the top, bottom, left, or right of the screen.
  • the order may include automatically setting a plurality of consecutive touch inputs in the order in which they are input.
  • a last touch input among inputs for continuously touching a plurality of points for setting an insert point may be distinguished from other touch inputs.
  • the last touch input may be different from other touch inputs in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches.
  • the last touch input may be a long touch input applied for more than a preset time.
  • a touch input for setting the last insert point may be the last touch input. When the last touch input is applied, insert point setting may be terminated. In this case, a content capturing screen for capturing additional content may be output to the display.
  • a number indicating the insert point may indicate the order.
  • the first set number of insert points may be indicated as '1'
  • the second set number of insert points may be indicated as '2'. That is, the number of insert points set to the k-th may be expressed as 'k'.
  • k may represent a natural number of 1 or more.
  • the display may output a main content recording screen, an additional content shooting/recording screen, or a separate main content or additional content screen.
  • the separate main content or additional content screen may include a case in which the main content or additional content is output to the entire screen of the display, output to a partial area, or output to a pop-up window.
  • only one insert point may be set by the inputs 1501 and 1502 that sequentially touch a plurality of points.
  • the setting of the first point as the first insert point is canceled by the touch input 1502 of the second point for setting the insert point, and the second point may be set as the first insert point.
  • FIG. 16 illustrates an example of a user input for capturing additional content of an image type.
  • the controller 160 may execute an additional content capturing command to perform additional content capturing.
  • the user input for capturing additional content may include an input for touching the live area of the content capturing screen for capturing additional content (FIG. 16(a)) or an input for touching the execution area of a shooting command (FIG. 16(b)).
  • the user input for capturing additional content may include an input for touching a live area in a pop-up window of a content capturing screen for capturing additional content (FIG. 16(c)).
  • the additional content may be an enlarged photograph of a focused portion on the live area.
  • the command for capturing additional content may be executed by a user input for setting an insert point or a user input for capturing the main content.
  • when an additional content capturing command is executed by a user input for setting an insert point, an additional content capturing command for a portion corresponding to the corresponding insert point may be executed.
  • when the insert point setting input is performed on the content capturing screen for capturing the main content, a separate content capturing screen for additional content may not be output on the display.
  • additional content capturing associated with the plurality of insert points may be performed according to the order of the plurality of insert points. For example, after capturing of the additional content to be linked to the first insert point corresponding to the first order is performed, the additional content to be linked to the second insert point corresponding to the second order may be captured.
  • the shooting of the additional content is performed after the setting of the insert point is performed, but the setting of the insert point may be performed after the shooting of the additional content is performed first.
  • An insert point setting command for setting an insert point in the first additional content may be executed by a user input for capturing the additional content (the first additional content).
  • the insert point of the first additional content may link the first additional content and the second additional content.
  • the first additional content may serve as the main content in relation to the second additional content.
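The nesting just described, in which additional content can itself play the main content role for further additional content, could be modeled as a small tree. The `Content`, `link`, and `depth` names below are hypothetical illustrations, not terminology from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Content:
    name: str
    # (insert point coords, linked child) pairs; a child can itself carry
    # insert points, playing the "main content" role for its own
    # additional content, as described above.
    links: List[Tuple[tuple, "Content"]] = field(default_factory=list)

    def link(self, coords, child):
        self.links.append((coords, child))
        return child

def depth(content):
    """Length of the longest main -> additional chain starting here."""
    return 1 + max((depth(child) for _, child in content.links), default=0)

main = Content("main-video")
first = main.link((120, 80), Content("first-additional"))
first.link((40, 40), Content("second-additional"))  # first acts as the main content here
```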
  • FIG. 17 illustrates an example of a user input for recording additional content of a video type.
  • the controller 160 may execute at least one additional content recording related command (eg, a recording start command and a recording end command) to perform additional content recording.
  • the recording-related command may include an additional content recording start command, a recording pause command, a recording resume command, and a recording end command.
  • the user input for additional content recording may include an input for touching the live area of the content recording screen for additional content recording (FIG. 17(a)) or an input for touching the execution area of a recording-related command (FIG. 17(b)).
  • the user input for additional content recording may include an input for touching a live area in a pop-up window of a content recording screen for additional content recording (FIG. 17(c)).
  • the additional content may be an enlarged recording of a focused portion on the live area.
  • the command related to recording of the additional content may be executed by a user input for setting an insert point or a user input for recording the main content.
  • for example, when an additional content recording start command is executed by a user input for setting an insert point, an additional content recording start command for the portion corresponding to that insert point may be executed.
  • when the insert point setting input is performed on the content capturing screen for capturing the main content, a separate content capturing screen for additional content may not be output on the display.
  • additional content recording associated with the plurality of insert points may be performed according to the order of the plurality of insert points. For example, after recording of the additional content to be linked to the first insert point corresponding to the first order is performed, the recording of the additional content to be linked to the second insert point corresponding to the second order may be performed.
  • the recording of the additional content is preferably performed after the setting of the insert point is performed, but the setting of the insert point may be performed after the recording of the additional content is performed first.
  • An insert point setting command for setting an insert point in the first additional content may be executed in response to a user input for recording the additional content (the first additional content).
  • the insert point of the first additional content may link the first additional content and the second additional content.
  • the first additional content may serve as the main content in relation to the second additional content.
  • Associating the additional content based on the set insert point may include storing the additional content or link information of the additional content in the main content. Alternatively, it may include storing link information of the additional content separately from the main content. In this case, link information of the additional content may be obtained from the database based on key data of the main content.
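The two association strategies above (embedding link info in the main content, versus storing it separately and looking it up by the main content's key data) might be sketched as follows. The dictionary "database" and all field names are assumptions for illustration only.

```python
# Strategy 1: embed the link info directly in the main content record.
embedded = {
    "key": "main-001",
    "frames": "<main content payload placeholder>",
    "links": [{"insert_point": (10, 20), "additional": "add-007"}],
}

# Strategy 2: keep link info in a separate table keyed by the main
# content's key data, as described above.
link_table = {}

def associate(main_key, insert_point, additional_key):
    link_table.setdefault(main_key, []).append(
        {"insert_point": insert_point, "additional": additional_key}
    )

def links_for(main_key):
    # Link info is obtained from the "database" by the main content's key data.
    return link_table.get(main_key, [])

associate("main-002", (30, 40), "add-008")
```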
  • the main content may be associated with at least one additional content.
  • any one of the plurality of insert points may be associated with whichever of the at least one additional content has the highest matching rate with the object corresponding to that insert point.
  • the plurality of insert points may include those set to be the same as the coordinates of each position selected by simultaneous or sequential user input.
  • the matching rate may include a similarity rate between the additional content and the object of any one of the plurality of insert points.
  • the similarity rate may be determined in consideration of at least one of a color, an angle, a shape, a type, and a size.
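A toy version of the matching rate above could compare the attributes the disclosure lists (color, angle, shape, type, size). Real systems would use visual features rather than exact attribute equality; the function names and weighting scheme here are assumptions.

```python
def similarity(obj_a, obj_b, weights=None):
    """Toy matching rate: fraction of matching attributes, optionally weighted."""
    attrs = ("color", "angle", "shape", "type", "size")
    weights = weights or {a: 1.0 for a in attrs}
    total = sum(weights[a] for a in attrs)
    score = sum(weights[a] for a in attrs if obj_a.get(a) == obj_b.get(a))
    return score / total

def best_match(insert_point_obj, additional_contents):
    """Pick the additional content with the highest matching rate."""
    return max(additional_contents, key=lambda c: similarity(insert_point_obj, c))

cup = {"color": "red", "angle": 0, "shape": "round", "type": "cup", "size": "small"}
cands = [
    {"color": "red", "angle": 0, "shape": "round", "type": "cup", "size": "large"},
    {"color": "blue", "angle": 90, "shape": "square", "type": "box", "size": "large"},
]
# The first candidate matches 4 of 5 attributes and wins.
```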
  • the output form of the main content may include outputting the main content and an insertion point of the main content to the display.
  • when there is an input on an insert point, the additional content associated with that insert point may be output. Specific details thereof have been described above, and thus will be omitted.
  • the controller 160 may set a second insert point for the main content and the first additional content linked by the first insert point.
  • the second insert point may be a link between the first additional content and the second additional content.
  • the first additional content may be main content
  • the second additional content may be additional content linked to the main content. Accordingly, the above-described content regarding the main content and the additional content may be directly applied to the first additional content (the role of the main content) and the second additional content (the role of the additional content).
  • the user input for setting the second insert point of the first additional content may include an input of touching a live area of the content capturing screen for capturing the first additional content or an input of touching an execution area of a shooting command.
  • a user input for setting the second insert point may be distinguished from a user input for shooting the first additional content.
  • a shooting command of the first additional content may be executed by a user input for setting the second insert point.
  • a second additional content shooting command or at least one second additional content recording related command may be executed by a user input for setting the second insert point.
  • a second additional content shooting command or at least one second additional content recording related command may be executed together with the first additional content shooting command by a user input for setting the second insert point.
  • the user input for setting the second insert point may include an input of touching any one of at least one candidate.
  • the first additional content may include a plurality of insert points, and the plurality of insert points may be set by simultaneously or sequentially touching a plurality of points in the live area.
  • FIG. 18 is a diagram illustrating a method of concurrently performing video-type main content recording and insert point setting on the main content.
  • a method of concurrently performing video-type main content recording and setting of an insert point on the main content may include at least one of: recording the main content (S1801); setting an insert point (S1802); shooting/recording additional content (S1803); or associating the main content with the additional content (S1804).
  • recording the main content may include performing recording of the main content by executing at least one recording-related command (a recording start command, a recording pause command, a recording resume command, or a recording end command, etc.) in response to at least one user input for recording the main content.
  • the recording of the main content may be performed by a recording start command and a recording end command. Specifically, a recording start command is executed to start recording of the main content, and a recording end command is executed while recording is in progress to end the recording of the main content, so that the recording of the main content can be performed.
  • a recording pause command and a recording resume command may be further included.
  • the recording pause command and the recording resume command may not be essential commands for main content recording, unlike the recording start command and the recording end command.
  • a user input for recording the main content may be distinguished from a user input for setting an insert point, which will be described later.
  • when both the touch input for setting the insert point and the touch input for recording the main content are touch inputs to the live area of the content recording screen for recording the main content, the two touch inputs may differ in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches.
  • for example, the touch input for setting the insert point may be a long touch input applied at the same location for a preset time or longer, and the touch input for main content recording may be a short touch input applied at the same location for less than the preset time.
  • the touch input for setting the insert point may be the short touch input
  • the touch input for recording the main content may be the long touch input.
  • An insert point setting command, which will be described later, may be executed by a user input for recording the main content. That is, even if there is no user input for setting a separate insert point, at least one recording-related command and an insert point setting command may all be executed by the user input for recording the main content. For example, when a touch input is applied to a live area of a content recording screen for main content recording, at least one recording-related command and an insert point setting command may be executed. As another example, when a touch input is applied to the execution region of the recording-related command while the focused portion of the camera is output on the live region, a main content recording-related command and an insert point setting command may be executed.
  • the step of setting the insert point may include, when there is a user input for setting the insert point, executing an insert point setting command and setting the insert point based on the user input.
  • the step of shooting/recording the additional content may include executing an additional content shooting command or at least one additional content recording related command when there is a user input for shooting/recording the additional content.
  • the step of associating the main content with the additional content may include associating the recorded main content with the subsequent shooting or recorded additional content based on a set insert point on the main content.
  • the controller 160 may execute at least one recording related command to perform main content recording.
  • the controller 160 may execute an insert point setting command to set the insert point based on the user input.
  • the controller 160 may execute an additional content shooting command or at least one additional content recording related command to perform additional content shooting/recording.
  • the controller 160 may link the additional content based on an insert point set in the main content.
  • 19 is a diagram illustrating an example of a user input for recording main content.
  • a content recording screen for recording the main content may be output on the display.
  • the controller 160 may perform main content recording by executing at least one main content recording related command (eg, a recording start command and a recording end command) in response to at least one user input for recording the main content.
  • the user input for main content recording may include a touch input on the execution area of a recording related command of the content recording screen for main content recording (FIG. 19(a)) or a touch input on the live area (FIG. 19(b)).
  • when the user input for recording the main content is a touch input 1901 on the execution area of the recording-related command of the content recording screen for recording the main content (FIG. 19(a)), a separate user input 1902 for setting the insert point may be required.
  • when an insert point setting command can be executed by the user input 1903 for main content recording, there may be no need for a separate user input for setting the insert point.
  • the controller 160 may focus the camera on a portion corresponding to the corresponding position.
  • the recording of the main content may be performed by a plurality of user inputs for recording the main content.
  • the recording of the main content may be performed by a first user input for executing a recording start command and a second user input for executing a recording end command. That is, the recording of the main content is started by the first user input and then ended by the second user input, so that the recording of the main content can be performed.
  • the recording of the main content may be performed by one user input for recording the main content.
  • the recording of the main content may be performed by executing a recording start command and a recording end command by one user input. That is, by one user input, the recording of the main content is started, and then the recording of the main content is ended, so that the recording of the main content can be performed.
  • one user input for main content recording may be a long touch input.
  • a recording start command may be executed from the touch point to start recording of the main content
  • a recording end command may be executed when the touch is released to end the recording of the main content.
  • either one of the recording start command and the recording end command may be executed by a user input, and the other may be automatically executed according to a pre-set condition, so that the main content recording can be performed.
  • for example, a recording start command is executed by one user input for recording the main content so that the recording of the main content starts, and when a pre-set predetermined time elapses, the recording end command is automatically executed to end the recording, so that recording of the main content may be performed.
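The single-input recording lifecycle above (start at the touch; end at touch release, or automatically after a pre-set time) could be sketched as a function returning the recorded span. Times are plain seconds and the function name is an assumption.

```python
def record_span(press_time, release_time=None, auto_end_after=None):
    """Return (start, end) of a main-content recording driven by one input.

    Recording starts at the touch; it ends at touch release, or
    automatically once a pre-set duration elapses -- whichever comes
    first when both are configured.
    """
    start = press_time
    candidates = []
    if release_time is not None:
        candidates.append(release_time)
    if auto_end_after is not None:
        candidates.append(press_time + auto_end_after)
    if not candidates:
        raise ValueError("need a release time or an auto-end duration")
    return start, min(candidates)

# Long touch: pressed at 2.0 s, released at 9.5 s -> records 2.0..9.5.
# With a 5 s pre-set limit, recording ends at 7.0 s even if the touch is held.
```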
  • the screen output to the display may be a main content recording screen, an additional content shooting/recording screen, or a separate main content or additional content screen.
  • the separate main content or additional content screen may include a case in which the main content or additional content is output to the entire screen of the display, output to a partial area, or output to a pop-up window.
  • the setting of the insert point of the main content may be performed by specifying an effective time and location of the corresponding insert point.
  • setting of each insert point may be performed by specifying an effective time and location of each insert point.
  • the position of the insert point of the main content may be specified by one coordinate.
  • the effective time may be specified in units of at least one of hours, minutes, and seconds.
  • the effective time of the insert point may be specified as a start time (or a frame corresponding to the start time) of the effective time and an end time (or a frame corresponding to the end time) of the effective time.
  • for example, if the start point of the effective time is '6 seconds' and the end point is '16 seconds', the effective time of the insert point is specified as '00:06 (6 seconds) to 00:16 (16 seconds)'.
  • the effective time of the insert point may be specified by a start time (or a frame corresponding to the start time) of the effective time and a duration (or the number of frames corresponding to the duration) of the effective time. For example, if the start point of the effective time is '6 seconds' and the duration is '10 seconds', the effective time of the insert point is specified as '00:06 (6 seconds) to 00:16 (16 seconds)'.
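The two equivalent specifications above (start + end versus start + duration) can be normalized to one representation; this is a minimal sketch whose function names are illustrative.

```python
def effective_time(start, end=None, duration=None):
    """Normalize an effective time to (start, end) in seconds.

    Either the end time or the duration may be given, mirroring the two
    specifications above ('00:06 to 00:16' == start 6 s + duration 10 s).
    """
    if end is None:
        if duration is None:
            raise ValueError("specify an end time or a duration")
        end = start + duration
    return start, end

def fmt(seconds):
    """Render whole seconds as the MM:SS notation used in the examples."""
    return f"{seconds // 60:02d}:{seconds % 60:02d}"

span = effective_time(6, duration=10)
# fmt(span[0]) == "00:06" and fmt(span[1]) == "00:16"
```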
  • a discontinuous effective time includes a plurality of consecutive effective times. Therefore, if each consecutive effective time is specified, the effective time of the discontinuous insert point can be specified. For example, when the effective time of a discontinuous insert point includes three effective times, if the first, second, and third effective times are each specified by a start time and an end time (or a start time and a duration), the effective time of the discontinuous insert point can be specified. Alternatively, since the effective time of the discontinuous insert point includes a plurality of consecutive invalid times, it can be specified by specifying each invalid time.
  • the position of the insert point during the effective time of the insert point may be specified by dividing the effective time into at least one section and specifying the position of the insert point for each section.
  • the position of the insert point within each section may be fixed. In this case, the position of the insert point may be specified by one coordinate. The position of the insert point within each section may be changed. In this case, the position of the insert point may be specified in the following embodiment.
  • the position of the insert point in each section may be specified by the coordinates of the insert point at the start time (or the frame corresponding to the start time) of the section and the coordinates of the insert point at the changed time point.
  • the position of the insert point can be specified by the coordinates of the insert point at the start time of the section (or the frame corresponding to the start time) and the coordinates of the insert point at each variable time point.
  • the position of the insert point within each section may be specified by a motion vector indicating the coordinates and motion of the insert point at the start time (or frame corresponding to the start time) of the section.
  • in this case, the position of the insert point within the section may be specified as having (x, y) as the starting position and moving at a constant speed of (motion vector size)/(duration of the section) in the direction of the motion vector for the duration of the section.
  • the position of the insert point within each section may be specified by the coordinates of the insert point at the start time of the section (or the frame corresponding to the start time) and the coordinates of the insert point at the end time of the section (or the frame corresponding to the end time).
  • the position of the insert point within the corresponding section may be specified as moving at a constant speed with (x1, y1) as a starting position and (x2, y2) as an ending position.
  • the speed may be determined by (a straight line distance between the coordinates of the starting point and the coordinates of the ending point)/(duration of the section).
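The constant-speed movement and the speed formula above can be written out directly; the interpolation is linear between the section's start and end coordinates. Function names are illustrative.

```python
import math

def position_at(t, section_start, section_end, p_start, p_end):
    """Position of an insert point at time t within a section whose position
    moves at constant speed from p_start to p_end (coordinates as (x, y))."""
    if not section_start <= t <= section_end:
        raise ValueError("t is outside the section")
    frac = (t - section_start) / (section_end - section_start)
    return (p_start[0] + (p_end[0] - p_start[0]) * frac,
            p_start[1] + (p_end[1] - p_start[1]) * frac)

def speed(section_start, section_end, p_start, p_end):
    """Speed = (straight-line distance between the start and end
    coordinates) / (duration of the section), per the bullet above."""
    dist = math.hypot(p_end[0] - p_start[0], p_end[1] - p_start[1])
    return dist / (section_end - section_start)

# Section 0..10 s moving (0, 0) -> (30, 40): halfway the point is at
# (15, 20), and the speed is 50 / 10 = 5 units per second.
```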
  • a user input for setting an insert point may be a touch input on a live area of a content recording screen for main content recording.
  • the insert point may be set based on a touch input of the live area.
  • the insert point may be set to be the same as the touch input coordinates.
  • the insert point may be set at all coordinates within a range having a predetermined shape based on the coordinates input by the touch.
  • the size of the predetermined shape may be pre-set or may be adaptively determined in consideration of a display screen ratio, an output ratio of content, and the like.
  • a user input for setting an insert point may be distinguished from a user input for recording main content. For example, when both the user input for setting the insert point and the user input for recording the main content are performed as touch inputs to the live area of the content shooting screen for shooting the main content, the touch input for setting the insert point and the touch input for main content shooting may differ in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches. As this has been described above, it will be omitted.
  • At least one recording related command may be executed by a user input for setting an insert point. That is, even if there is no separate user input for recording the main content, both the insert point setting command and at least one recording related command may be executed by one user input for setting the insert point.
  • the user input for setting the insert point may be a fixed short touch input.
  • the fixed short touch input may include input of the touch input at a fixed position for less than a preset time.
  • the position and effective time of the insert point are specified, and the insert point setting may be performed.
  • the effective time of the insert point may be specified to be the same as the time for a pre-set time from the point of the touch.
  • the effective time of the insert point may be specified from the time of the touch until the object at the position of the touch input disappears from the screen (frame out).
  • the position of the insert point may be specified by the same coordinates as the coordinates at which the touch input is applied.
  • the position of the insert point may be specified by tracking the position of the object at the position of the touch input. The tracking may be performed during the validity time of the insert point.
  • a recording start command and a recording end command are executed by the fixed short touch input, so that the main content can be recorded.
  • the recording start command is executed at the time of the touch to start recording the main content, and when a pre-set predetermined time elapses, the recording end command may be automatically executed and ended.
  • the user input for setting the insert point may be a fixed long touch input.
  • the fixed long touch input may include input of the touch input at a fixed position for a predetermined time or longer.
  • the position of the insert point and the effective time are specified, and the insert point setting can be performed.
  • the position of the insert point may be specified by the same coordinates as the coordinates of the touch input at the point of time of the touch.
  • the position of the insert point may be specified by tracking the position of the object at the position of the touch input.
  • the effective time of the insert point may be specified to be the same as the time from the point in time of the touch to the point in time when the touch is released.
  • the effective time of the insert point may be specified to be the same as a pre-set time from the point of the touch. Alternatively, the effective time of the insert point may be specified from the time of the touch until the object at the position of the touch input disappears from the screen (frame out).
  • a recording start command and a recording end command are executed by the fixed long touch input, so that the main content can be recorded.
  • a recording start command may be executed at a point in time of the touch to start recording of the main content
  • a recording end command may be executed at a point in time where the touch is released to end recording of the main content.
  • the recording start command is executed at the time of the touch, so that the recording of the main content is executed, and when a pre-set predetermined time elapses, the recording end command is automatically executed and may be terminated.
  • the user input for setting the insert point may be a long drag touch input.
  • the long drag touch input may include input at a location where the touch input is not fixed for more than a preset time.
  • the position and effective time of the insert point are specified, and the insert point setting may be performed.
  • the position of the insert point may be specified by the same coordinates as the coordinates of the touch input at the point of time of the touch.
  • the position of the insert point may be specified by a start position and an end position, where the coordinates of the touch input at the time of the touch are the start position of the insert point and the coordinates of the touch input at the time of releasing the touch are the end position of the insert point.
  • alternatively, the position of the insert point may be specified such that the coordinates of the touch input at the time of the touch are the start position of the insert point, and the direction and speed of the touch input dragged from the start position are the direction and speed of the motion vector of the insert point.
  • the position of the insert point may be specified by tracking the position of the object at the position of the touch input.
  • the effective time of the insert point may be specified to be the same as the time from the point in time of the touch to the point in time when the touch is released.
  • the effective time of the insert point may be specified to be the same as a pre-set time from the point of the touch. Alternatively, the effective time of the insert point may be specified from the time of the touch until the object at the position of the touch input disappears from the screen (frame out).
  • a recording start command and a recording end command are executed by the long drag touch input, so that the main content can be recorded.
  • a recording start command may be executed at a point in time of the touch to start recording of the main content
  • a recording end command may be executed at a point in time where the touch is released to end recording of the main content.
  • the recording start command is executed at the time of the touch, so that the recording of the main content is executed, and when a pre-set predetermined time elapses, the recording end command is automatically executed and may be terminated.
  • an additional content shooting command or at least one additional content recording related command may be executed by a user input for setting the insert point. That is, even if there is no user input for additional content shooting/recording, an insert point setting command and an additional content shooting command or at least one additional content recording related command (eg, a recording start command and a recording end command) can all be executed by one user input for setting the insert point.
  • an additional content recording command or at least one additional content recording related command may be executed along with at least one main content recording related command by a user input for setting an insert point.
  • for example, at least one main content recording related command is executed to perform main content recording, an insert point setting command is executed so that the insert point is set to be the same as the coordinates input by touch in the live area of the main content, and an additional content shooting command or at least one additional content recording related command is executed, so that additional content shooting/recording for the touch-input coordinates can be performed.
  • the additional content shooting/recording includes capturing/recording by enlarging a portion to which a touch input is applied, and the magnification ratio may be a preset value or a value adaptively determined by the size of an object.
  • main content recording and the additional content shooting/recording may be sequentially performed by one camera unit.
  • main content recording and additional content shooting/recording may be performed by a plurality of camera units.
  • the first camera unit among the plurality of camera units may record main content
  • the second camera unit may record/record additional content.
  • the first camera unit may be any one of a camera unit for wide-angle photographing/recording or a camera unit for enlarged photographing/recording
  • the second camera unit may be the other one.
  • the controller 160 may set the insert point based on the first touched position and may set the type of the additional content based on the dragged direction. Specifically, if the drag direction of the touch input is the first direction, the first additional content may be set as a still picture type, and if the drag direction of the touch input is the second direction, the first additional content may be set as a moving picture type.
  • the controller 160 may focus the camera on a portion corresponding to the corresponding position.
  • FIG. 20 is a diagram illustrating an example of setting any one of at least one candidate as an insert point.
  • the user input for setting the insert point may be a touch input 2014 of any one of at least one candidate.
  • at least one candidate may be output to the live area, and when there is a touch input in any one of the output at least one candidate, the touch input candidate may be set as an insert point. That is, if there is a touch input, the coordinates and the effective time of the touch-input candidate may be the coordinates and the effective time of the insert point.
  • at least one candidate may be set based on an insert point set in main content (hereinafter, second main content) different from the main content (hereinafter, first main content) in which the insert point is to be set.
  • a candidate for an insert point of the first main content having coordinates corresponding to the coordinates of the insert point of the second main content and having an effective time equal to the effective time of the insert point of the second main content is output to the live area.
  • for example, the first insert point candidate 2011 may be output at the coordinates of the first main content corresponding to the coordinates of the first insert point 2001 of the second main content, the second insert point candidate 2012 may be output at the coordinates of the first main content corresponding to the coordinates of the second insert point 2002, and the third insert point candidate 2013 may be output at the coordinates of the first main content corresponding to the coordinates of the third insert point 2003.
  • the corresponding coordinates may be coordinates of the same position as the coordinates of the insertion point of the second main content.
  • the corresponding coordinates may be coordinates in which the coordinates of the insert point of the second main content are changed according to at least one of the size or ratio.
  • the coordinates of the same or similar object in the first main content may be output to the live area as candidates for the insert point of the first main content.
  • the effective time of the candidate of the insertion point of the first main content may be the same as the time at which the same or similar object located at the coordinates of the candidate is output to the live area. For example, it may be specified from a point in time when there is a touch input for selecting a candidate to when an object at a location of the touch input disappears from the screen (frame out).
  • the similar object may include a similar object in at least one of color, size, type, shape, or angle. For example, if only the viewing angle is changed, the same object, an object only having a different color, or an object having a different size may be included.
  • At least one candidate may be set based on another insert point of the main content in which the insert point is set. For example, coordinates of an object that is the same as or similar to an object located at another insert point of the main content may be output to the live area as a candidate for the insert point of the main content.
  • the other insert points may include an insert point that overlaps with a candidate for an insert point of the main content but has different coordinates and an insert point whose effective time does not overlap with a candidate for an insert point of the main content.
  • the effective time of the candidate of the insert point of the main content may be the same as the time when the same or similar object located at the coordinates of the candidate is output to the live area.
  • a plurality of insert points may be set in the main content.
  • each of the plurality of insert points may be associated with additional content.
  • the additional contents associated with the plurality of insert points may be different from or the same as each other. In the latter case, one additional content may be associated with a plurality of insert points.
  • a plurality of additional content may be associated with one insert point.
  • the plurality of insert points may include insert points of different coordinates with overlapping effective times. Also, the plurality of insert points may include insert points having different effective times. In this case, since the effective times are different, the coordinates of the plurality of insert points may be the same.
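The constraints above — insert points with overlapping effective times must differ in coordinates, while insert points with disjoint effective times may share the same coordinates, and each insert point may carry its own associated additional content — can be modeled with a small record type. All names below are hypothetical illustrations, not taken from the disclosure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InsertPoint:
    x: int
    y: int
    start: float  # effective-time start, seconds into the main content
    end: float    # effective-time end
    additional: List[str] = field(default_factory=list)  # linked additional-content ids

def conflicts(a: InsertPoint, b: InsertPoint) -> bool:
    """Two insert points conflict only when their effective times overlap
    AND their coordinates coincide; identical coordinates are fine when
    the effective times are disjoint, as the text above allows."""
    times_overlap = a.start < b.end and b.start < a.end
    same_coords = (a.x, a.y) == (b.x, b.y)
    return times_overlap and same_coords
```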
  • FIG. 21 illustrates an embodiment in which a plurality of insert points are set by an input of simultaneously touching a plurality of points.
  • when the user input for setting the insert point is an input (2101, 2102) of simultaneously touching a plurality of points on the live area of the content recording screen for recording the main content, the plurality of insert points may be set.
  • the plurality of insert points may be set based on the plurality of points touched on the live area.
  • the first insert point may be set to be the same as the coordinates of the first point among the plurality of points
  • the second insert point may be set to be the same as the coordinates of the second point.
  • when at least one main content recording related command is executed together with an insert point setting command by a user input for setting an insert point, the main content recording related command may be executed to perform main content recording, and the insert point setting command may be executed to set a plurality of insert points in the recorded main content.
  • by a user input for setting an insert point, an additional content shooting command or at least one additional content recording related command may be executed along with the plurality of insert point setting commands.
  • the first additional content shooting/recording may be performed by the input 2101 of touching the first point among the plurality of points, and the second additional content shooting/recording may be performed by the input 2102 of touching the second point.
  • the user input for setting the insert point may include an input of simultaneously touching a plurality of candidates.
  • a candidate selected by an input of touching a first point among the plurality of points may be set as the first insert point
  • a candidate selected by an input of touching a second point may be set as the second insert point.
  • the insert points may have an order.
  • the order may be set manually or automatically.
  • An example of manually setting may include arbitrarily setting an order by a user.
  • Examples of automatic setting may include setting the order according to proximity based on at least one of the top, bottom, left, or right of the screen.
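The automatic ordering rule above can be sketched as a simple reading-order sort. Using the top-left of the screen as the reference is an assumption for illustration; the disclosure only says the order may follow proximity based on at least one of top, bottom, left, or right.

```python
from typing import List, Tuple

def auto_order(points: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Order insert points automatically: nearer the top comes first
    (smaller y), with ties broken left to right (smaller x)."""
    # Sort key (y, x) implements a reading-order traversal of the screen.
    return sorted(points, key=lambda p: (p[1], p[0]))
```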
  • FIG. 22 illustrates an embodiment in which a plurality of insert points are set by an input of sequentially touching a plurality of points.
  • the controller 160 may set the plurality of insert points.
  • the plurality of insert points may be set based on the plurality of points touched on the live area.
  • the first insert point may be set to be the same as the coordinates of the first point among the plurality of points
  • the second insert point may be set to be the same as the coordinates of the second point.
  • by the touch input 2201 of the first point for setting the insert point, the first insert point may be set at the same position as the coordinates of the first point and recording may be started. Thereafter, by the touch input 2202 of the second point for setting the insert point, the second insert point may be set at the same position as the coordinates of the second point and recording may be paused (or stopped).
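The sequential-touch flow above — the first touch sets an insert point and starts recording, while a later touch sets another insert point and pauses (or stops) recording — can be sketched as a minimal state machine. The class and method names are hypothetical.

```python
class SequentialInsertRecorder:
    """First touch: set an insert point and start recording.
    Any later touch: set another insert point and pause recording."""

    def __init__(self) -> None:
        self.insert_points: list = []
        self.recording: bool = False

    def touch(self, x: int, y: int) -> str:
        # every touch for setting an insert point records its coordinates
        self.insert_points.append((x, y))
        if len(self.insert_points) == 1:
            self.recording = True   # first touch starts main-content recording
            return "recording started"
        self.recording = False      # later touches pause (or stop) recording
        return "recording paused"
```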
  • At least one of at least one main content recording related command or additional content capturing command may be executed by a user input for setting an insert point.
  • at least one of a main content recording command and an additional content recording related command may be executed by a user input for setting an insert point.
  • the input for setting the insert point may include an input for continuously touching a plurality of candidates.
  • the insert points may have an order.
  • the order may be set manually or automatically.
  • An example of manually setting may include arbitrarily setting an order by a user.
  • Examples of automatic setting may include setting the order according to proximity based on at least one of the top, bottom, left, or right of the screen.
  • the order may be set automatically according to the order in which the plurality of consecutive touch inputs are input.
  • a last touch input among inputs for continuously touching a plurality of points for setting an insert point may be distinguished from other touch inputs.
  • the last touch input may be different from other touch inputs in at least one of a touch position, a touch pattern, a touch intensity (touch pressure), a touch time, or the number of touches.
  • the last touch input may be a long touch input input for more than a pre-set time.
  • a touch input that sets the last insert point may be treated as the last touch input.
  • when the last touch input is input, insert point setting may be terminated. In this case, a content capturing screen for capturing additional content may be output to the display.
  • a number indicating the insert point may indicate the order.
  • the number of the first-set insert point may be indicated as '1', and the number of the second-set insert point may be indicated as '2'. That is, the number of the k-th-set insert point may be expressed as 'k', where k represents a natural number of 1 or more.
  • the display may output a main content recording screen, an additional content shooting/recording screen, or a separate main content or additional content screen.
  • the separate main content or additional content screen may include a case in which the main content or additional content is output to the entire screen of the display, output to a partial area, or output to a pop-up window.
  • only one insert point may be set by the inputs 2201 and 2202 that sequentially touch a plurality of points.
  • the setting of the first point as the first insert point may be canceled by the touch input 2202 of the second point for setting the insert point, and the second point may be set as the first insert point.
  • FIG. 16 illustrates an example of a user input for capturing additional content of an image type.
  • the controller 160 may execute an additional content capturing command to perform additional content capturing.
  • the user input for additional content shooting may include an input of touching the live area of the content output screen for additional content shooting (FIG. 16(a)) or an input of touching the execution area of a shooting command (FIG. 16(b)).
  • the user input for capturing additional content may include an input of touching a live area within a pop-up window of a content output screen for capturing additional content (FIG. 16(c)).
  • the additional content may be an enlarged photograph of a focused portion on the live area.
  • the command for capturing additional content may be executed by a user input for setting an insert point or a user input for capturing the main content.
  • when an additional content capturing command is executed by a user input for setting an insert point, a capturing command for the portion corresponding to that insert point may be executed.
  • when the insert point setting input is performed on the content capturing screen for capturing the main content, a separate content capturing screen for additional content may not be output on the display.
  • additional content capturing associated with the plurality of insert points may be performed according to the order of the plurality of insert points. For example, after capturing of the additional content to be linked to the first insert point corresponding to the first order is performed, the additional content to be linked to the second insert point corresponding to the second order may be captured.
  • the shooting of the additional content is preferably performed after the setting of the insert point, but the setting of the insert point may instead be performed after the shooting of the additional content is performed first.
  • An insert point setting command for setting an insert point in the first additional content may be executed by a user input for capturing the additional content (the first additional content).
  • the insert point of the first additional content may link the first additional content and the second additional content.
  • the first additional content may serve as the main content in relation to the second additional content.
  • FIG. 17 illustrates an example of a user input for recording additional content of a video type.
  • the controller 160 may execute at least one additional content recording related command (eg, a recording start command and a recording end command) to perform additional content recording.
  • the recording-related command may include an additional content recording start command, a recording pause command, a recording resume command, and a recording end command.
  • the user input for additional content recording may include an input of touching the live area of the content recording screen for additional content recording (FIG. 17(a)) or an input of touching the execution area of a recording-related command (FIG. 17(b)).
  • the user input for additional content recording may include an input for touching a live area in a pop-up window of a content output screen for additional content recording (FIG. 17(c)).
  • the additional content may be an enlarged recording of a focused portion on the live area.
  • the command related to recording of the additional content may be executed by a user input for setting an insert point or a user input for recording the main content.
  • when an additional content recording start command is executed by a user input for setting an insert point, a recording start command for the portion corresponding to that insert point may be executed.
  • when the insert point setting input is performed on the content capturing screen for capturing the main content, a separate content capturing screen for additional content may not be output on the display.
  • additional content recording associated with the plurality of insert points may be performed according to the order of the plurality of insert points. For example, after recording of the additional content to be linked to the first insert point corresponding to the first order is performed, the recording of the additional content to be linked to the second insert point corresponding to the second order may be performed.
  • the recording of the additional content is preferably performed after the setting of the insert point is performed, but the setting of the insert point may be performed after the recording of the additional content is performed first.
  • An insert point setting command for setting an insert point in the first additional content may be executed in response to a user input for recording the additional content (the first additional content).
  • the insert point of the first additional content may link the first additional content and the second additional content.
  • the first additional content may serve as the main content in relation to the second additional content.
  • Associating the additional content based on the set insert point may include storing the additional content or link information of the additional content in the main content. Alternatively, it may include storing link information of the additional content separately from the main content. In this case, link information of the additional content may be obtained from the database based on key data of the main content.
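The second association strategy described above — storing link information separately from the main content and retrieving it by the main content's key data — can be sketched as follows. The key format and the in-memory dictionary are illustrative stand-ins for whatever database an implementation would actually use.

```python
from typing import Dict, List, Tuple

class LinkStore:
    """Link information kept separately from the main content and looked
    up by the main content's key data."""

    def __init__(self) -> None:
        self._links: Dict[str, List[dict]] = {}

    def associate(self, main_key: str, insert_point: Tuple[int, int],
                  additional_uri: str) -> None:
        # record one (insert point -> additional content) link under the
        # main content's key
        self._links.setdefault(main_key, []).append(
            {"insert_point": insert_point, "additional": additional_uri}
        )

    def links_for(self, main_key: str) -> List[dict]:
        # fetch all link information for the main content by its key data
        return self._links.get(main_key, [])
```

This keeps the main content file untouched, matching the alternative in which link information is not stored inside the main content itself.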
  • the main content may be associated with at least one additional content.
  • any one of the plurality of insert points may be associated with the additional content, among the at least one additional content, having the highest matching rate with the object corresponding to that insert point.
  • the plurality of insert points may include those set to be the same as the coordinates of each position selected by simultaneous or sequential user input.
  • the matching rate may include a similarity rate between the additional content and the object of any one of the plurality of insert points.
  • the similarity rate may be determined in consideration of at least one of a color, an angle, a shape, a type, or a size.
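The matching-rate selection above can be sketched as an attribute-agreement score. Representing each object as a dictionary of categorical attributes and weighting all attributes equally are hypothetical simplifications of whatever similarity measure over color, angle, shape, type, and size an implementation would use.

```python
from typing import Dict, List

ATTRS = ("color", "angle", "shape", "type", "size")

def matching_rate(point_obj: Dict[str, str], content_obj: Dict[str, str]) -> float:
    """Fraction of the attributes on which the insert point's object and
    the additional content's object agree."""
    hits = sum(1 for a in ATTRS if point_obj.get(a) == content_obj.get(a))
    return hits / len(ATTRS)

def best_match(point_obj: Dict[str, str],
               candidates: List[Dict[str, str]]) -> Dict[str, str]:
    """Choose, among the additional contents, the one whose object has
    the highest matching rate with the insert point's object."""
    return max(candidates, key=lambda c: matching_rate(point_obj, c))
```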
  • the output form of the main content may include outputting the main content and an insertion point of the main content to the display.
  • when there is an input selecting the insert point, the additional content associated with that insert point may be output. Specific details thereof have been described above and thus are omitted.
  • the controller 160 may set a second insert point for the main content and the first additional content linked by the first insert point.
  • the second insert point may be a link between the first additional content and the second additional content.
  • the first additional content may be main content
  • the second additional content may be additional content linked to the main content. Accordingly, the above-described content regarding the main content and the additional content may be directly applied to the first additional content (the role of the main content) and the second additional content (the role of the additional content).
  • the user input for setting the second insert point of the first additional content may include an input of touching the live area of a content recording screen for recording the first additional content or an input of touching the execution area of a recording related command.
  • a user input for setting the second insert point may be distinguished from a user input for recording the first additional content.
  • a command related to recording of at least one first additional content may be executed in response to a user input for setting the second insert point.
  • a second additional content shooting command or at least one second additional content recording related command may be executed by a user input for setting the second insert point.
  • a second additional content recording command or at least one second additional content recording related command may be executed along with at least one first additional content recording related command by a user input for setting the second insert point.
  • the user input for setting the second insert point may include an input of touching any one of at least one candidate.
  • the first additional content may include a plurality of insert points, and the plurality of insert points may be set by simultaneously or sequentially touching a plurality of points in the live area.
  • the method for controlling a terminal in connection with capturing content may be implemented as a computer-readable recording medium including program instructions for performing various computer-implemented operations.
  • the computer-readable recording medium may include program instructions, local data files, local data structures, and the like alone or in combination.
  • the recording medium may be specially designed and configured for the embodiment of the present disclosure, or may be known and available to those skilled in the art of computer software. Examples of the computer-readable recording medium include magnetic media such as hard disks, floppy disks, and magnetic tapes; optical recording media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specially configured to store and execute program instructions, such as ROMs, RAMs, and flash memories.
  • the recording medium may be a transmission medium such as an optical or metal wire or waveguide including a carrier wave for transmitting a signal designating a program command, a local data structure, or the like.
  • Examples of program instructions may include high-level language codes that can be executed by a computer using an interpreter or the like as well as machine language codes such as those generated by a compiler.
  • the terminal of the present disclosure may set an insert point while shooting (or recording) various contents in various indoor and outdoor environments, and may link additional content that is subsequently shot (or recorded) based on the insert point.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A terminal of the present disclosure comprises: a camera; a display for outputting a live area generated by processing data input in real time from the camera; and a controller for capturing main content through the camera in response to a first user input selecting a first position in the live area, and subsequently performing control such that captured or recorded first additional content is linked to an insert point in the main content, wherein the controller performs control to output the first additional content, while the main content is being output, in response to a second user input selecting the insert point in the main content, and the position of the insert point in the main content may be set to be identical to the selected first position.
PCT/KR2020/012243 2020-09-10 2020-09-10 Terminal, son procédé de commande, et support d'enregistrement dans lequel est enregistré un programme pour la mise en œuvre du procédé WO2022054988A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/012243 WO2022054988A1 (fr) 2020-09-10 2020-09-10 Terminal, son procédé de commande, et support d'enregistrement dans lequel est enregistré un programme pour la mise en œuvre du procédé

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2020/012243 WO2022054988A1 (fr) 2020-09-10 2020-09-10 Terminal, son procédé de commande, et support d'enregistrement dans lequel est enregistré un programme pour la mise en œuvre du procédé

Publications (1)

Publication Number Publication Date
WO2022054988A1 true WO2022054988A1 (fr) 2022-03-17

Family

ID=80632218

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2020/012243 WO2022054988A1 (fr) 2020-09-10 2020-09-10 Terminal, son procédé de commande, et support d'enregistrement dans lequel est enregistré un programme pour la mise en œuvre du procédé

Country Status (1)

Country Link
WO (1) WO2022054988A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20130071794 (ko) * 2011-12-21 2013-07-01 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of controlling the same
KR20160029536 (ko) * 2014-09-05 2016-03-15 LG Electronics Inc. Mobile terminal and method of controlling the same
KR20190035026 (ko) * 2017-09-25 2019-04-03 Naver Corporation Method, apparatus, and computer program for providing video content
KR20190075654 (ko) * 2017-12-21 2019-07-01 Samsung Electronics Co., Ltd. Electronic device including a plurality of cameras and method of operating the same
KR20200084428 (ko) * 2018-12-24 2020-07-13 Samsung Electronics Co., Ltd. Method for producing video and apparatus therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20953390

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07-08-2023)

122 Ep: pct application non-entry in european phase

Ref document number: 20953390

Country of ref document: EP

Kind code of ref document: A1