CN115004681A - Display method - Google Patents

Display method

Info

Publication number
CN115004681A
CN115004681A (Application CN202080094541.9A)
Authority
CN
China
Prior art keywords
setting
area
image
display
recording
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202080094541.9A
Other languages
Chinese (zh)
Inventor
西尾祐也
田中康一
北川润也
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujifilm Corp
Original Assignee
Fujifilm Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corp
Publication of CN115004681A
Legal status: Pending (current)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00 Special procedures for taking photographs; Apparatus therefor
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B17/00 Details of cameras or camera bodies; Accessories therefor
    • G03B17/18 Signals indicating condition of a camera member or suitability of light
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B5/00 Adjustment of optical system relative to image or object surface other than for focusing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H04N23/635 Region indicators; Field of view indicators
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera, the recording apparatus and the television camera being placed in the same enclosure

Abstract

The invention improves convenience for users of a display method that cuts out part of a captured video and switches which cut-out video is recorded. A display method for displaying a video captured by an imaging device includes: a setting step of setting a plurality of setting regions in a reference region, which is the imaging region of a reference video; a selection step of selecting, from the plurality of setting regions, a recording area whose video is to be recorded; a switching step of, after the selection step, reselecting the recording area from the plurality of setting regions and thereby switching the recording area; and a display step of displaying the recording video, the reference video, and marks indicating the positions of the plurality of setting regions in the reference video.

Description

Display method
Technical Field
The invention relates to an image display method.
Background
Techniques have been developed for switching among, and storing, videos of any of a plurality of subjects within a video captured by a single imaging device. One example is the technique described in Patent Document 1.
In the technique described in Patent Document 1, a plurality of different image areas are set within the imaging range of the imaging device, and the video of one of those areas is recorded. The area whose video is recorded can also be changed (switched) among the plurality of areas. This makes it possible, for example, to obtain a zoom-enlarged video of each of a plurality of subjects in the captured video while the imaging device remains fixed.
Prior art documents
Patent literature
Patent Document 1: Japanese Patent Laid-Open Publication No. 2016-92467
Disclosure of Invention
Technical problem to be solved by the invention
An embodiment of the present invention has been made in view of the above circumstances, and an object thereof is to improve convenience for the user of a display method that cuts out part of a captured video and switches which cut-out video is used.
Means for solving the technical problem
In order to achieve the above object, one embodiment of the present invention is a display method for displaying an image captured by an imaging device, the display method including: a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image; a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas; a switching step of, after the selection step is performed, reselecting the recording area from the plurality of setting areas and switching the recording area; and a display step of displaying the recording image, the reference image, and a mark indicating each position of the plurality of setting regions in the reference image.
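The four steps of this embodiment (setting, selection, switching, display) can be sketched as a small state machine. This is only an illustrative model; the class and method names (`Region`, `DisplayState`, etc.) are not from the patent, and the "display" merely reports what would be shown.

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

@dataclass
class DisplayState:
    setting_regions: list = field(default_factory=list)
    recording_index: int = None

    # Setting step: set a plurality of setting regions in the reference region.
    def set_region(self, region):
        self.setting_regions.append(region)

    # Selection step: choose the recording area from the setting regions.
    def select_recording(self, index):
        self.recording_index = index

    # Switching step: reselect the recording area after the selection step.
    def switch_recording(self, index):
        assert self.recording_index is not None, "selection step must run first"
        self.recording_index = index

    # Display step: the recording video, the reference video, and one mark
    # per setting region would be shown together.
    def display(self):
        marks = [f"mark {i + 1}" for i in range(len(self.setting_regions))]
        return {"recording": self.recording_index, "marks": marks}
```

In this sketch the switching step is just the selection step repeated after an initial selection exists, which matches the claim's "reselecting the recording area."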
The mark may include a boundary of the setting region, and the image within the boundary in the reference image may be highlighted in the display step.
At the start of the setting step, the marks may be displayed in a state where the setting regions do not overlap with each other.
In the display step, a mark indicating the position of the recording area in the reference video and a mark indicating the position of the standby area, which is a set area other than the recording area, may be displayed in different manners.
In the display step, a mark indicating the position of a movement region, i.e., a setting region that moves following a moving subject, may be displayed in a manner different from the marks indicating the positions of setting regions other than the movement region.
In the display step, the recorded video and the reference video may be displayed on different displays.
The display step may further include: a 1st display step of displaying the recording video, the reference video, and the marks; and a 2nd display step of displaying the recording video without displaying the reference video and the marks, performing whichever of the 1st and 2nd display steps the user designates.
In the 2nd display step, the recording video and a standby video, i.e., the video of a setting area other than the recording area, may be displayed on different displays.
The number of marks displayed in the display step may be variable, and identification information of the marks may be displayed for each mark in the display step. In this case, when the number of displayed marks is changed, the identification information set in each mark is preferably maintained before and after the change in the number of displayed marks.
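Keeping each mark's identification information stable while marks come and go can be sketched as follows. The registry below is purely illustrative (the patent does not specify an assignment rule beyond serial numbering); the key property is that surviving marks keep their numbers when the display count changes.

```python
class MarkRegistry:
    """Assigns serial identification numbers to marks and keeps each number
    stable when marks are added or removed (illustrative sketch only)."""

    def __init__(self):
        self._next = 1
        self._ids = {}  # mark name -> identification number

    def add(self, name):
        # Each new mark gets the next serial number.
        self._ids[name] = self._next
        self._next += 1
        return self._ids[name]

    def remove(self, name):
        # Survivors keep their numbers; the freed number is not reused,
        # so identification is maintained before and after the change.
        del self._ids[name]

    def id_of(self, name):
        return self._ids[name]
```

With this rule, deleting mark 2 leaves marks 1 and 3 unchanged, which is the behavior the paragraph above calls "preferably maintained."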
Another embodiment of the present invention is a display method for displaying a video captured by an imaging device, the display method including: a setting step of setting a plurality of setting regions in a reference region that is the imaging region of a reference video; a selection step of selecting, from the plurality of setting regions, a recording area whose video is to be recorded; a switching step of, after the selection step, reselecting the recording area from the plurality of setting regions and switching the recording area; a determination step of determining priorities for a plurality of standby areas, a standby area being a setting region other than the recording area; and a display step of displaying a plurality of standby videos, which are the videos of the plurality of standby areas. When the number of standby areas set in the reference region reaches a set value for the number of displayed standby videos, the display step displays the standby videos of standby areas selected based on the priorities, or displays the plurality of standby videos at sizes corresponding to the priorities.
In the determination step, subjects in the standby videos may be detected, and the priorities of the standby areas may be determined based on information on the detected subjects.
In the determination step, each standby area's record of having been selected as the recording area in the past may be identified, and the priorities of the plurality of standby areas may be determined based on that record.
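The priority-based selection of standby videos described above can be sketched as a simple ranking: when the number of standby areas exceeds the display limit, only the highest-priority ones are shown. The function name and the "higher number = higher priority" convention are assumptions for illustration.

```python
def standby_views(standby, priorities, max_display):
    """Pick which standby videos to display when the standby-area count
    reaches the set value for displayed standby videos.

    standby     -- list of standby-area identifiers
    priorities  -- mapping: identifier -> priority (higher shows first)
    max_display -- set value for the number of displayed standby videos
    """
    if len(standby) <= max_display:
        return list(standby)
    ranked = sorted(standby, key=lambda a: priorities[a], reverse=True)
    return ranked[:max_display]
```

The same ranking could instead drive display size (larger tiles for higher priorities), the alternative the claim mentions.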
Another embodiment of the present invention is a display method for displaying a video captured by an imaging device, the display method including: a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image; a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas; a switching step of, after the selection step, reselecting the recording area from the plurality of setting areas and switching the recording area; a display step of displaying a plurality of setting images that are images of a plurality of setting areas; and a control step of executing a control process related to the change of the reference region after at least the reference region is determined.
Also, the control process may include at least one of: a process for controlling the change of the reference region, a process for prompting the change of the reference region, and a process for notifying the user of proposal information related to the change of the reference region.
When the imaging apparatus includes an optical device for zooming, the control process may be executed in association with zooming by the optical device.
The control process may be a process related to a change of the reference area after the time when the recording start instruction is received.
Another embodiment of the present invention is a display method for displaying a video captured by an imaging device, the display method including: a setting step of setting a plurality of setting regions in a reference region that is the imaging region of a reference video; a selection step of selecting, from the plurality of setting regions, a recording area whose video is to be recorded; a switching step of, after the selection step, reselecting the recording area from the plurality of setting regions and switching the recording area; and a display step of displaying setting videos, which are the videos of the setting regions. At least one of the number of setting videos displayed and their display size in the display step is determined based on at least one of the resolution and the aspect ratio of the recording video.
The number of setting videos displayed in the display step may be determined based on at least one of the above items, adjusted according to the recording format of the recording video.
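One way to derive a display layout from the recording video's resolution and aspect ratio is sketched below. The column count, panel size, and integer-division rule are illustrative assumptions; the patent only requires that count and/or size follow from the recording format.

```python
def display_layout(rec_w, rec_h, panel_w=1920, panel_h=1080, columns=3):
    """Choose a tile size that preserves the recording video's aspect
    ratio, then count how many whole tiles the display panel can hold."""
    tile_w = panel_w // columns
    # Integer math keeps the recording aspect ratio exactly for common formats.
    tile_h = tile_w * rec_h // rec_w
    rows = panel_h // tile_h
    return columns * rows, (tile_w, tile_h)
```

For a 16:9 recording format on a Full HD panel this yields a 3x3 grid of 640x360 tiles; a 4K recording format with the same aspect ratio yields the same layout, since only resolution and aspect ratio enter the rule.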
Drawings
Fig. 1 is a perspective view showing an example of an external appearance of an imaging device according to an embodiment of the present invention.
Fig. 2 is a rear view showing the back side of the imaging device according to the embodiment of the present invention.
Fig. 3 is a block diagram showing a configuration of an imaging device according to an embodiment of the present invention.
Fig. 4 is an explanatory diagram of electronic camera shake correction.
Fig. 5 is a diagram showing a relationship between the reference region and the setting region.
Fig. 6 is a diagram showing a relationship between a reference video and a recorded video.
Fig. 7 is a diagram showing frames of the recorded video before and after switching.
Fig. 8 is a diagram showing a reference image when a new subject enters the shooting region.
Fig. 9 is a diagram showing a state in which a moving region follows a moving object.
Fig. 10 is a diagram showing a flow of an image display flow.
Fig. 11 is a diagram showing transition of the UI screen in the setting step.
Fig. 12 is a diagram showing a state in which a setting area is set when the previous usage pattern is selected.
Fig. 13 is a diagram showing a selection screen of a preset pattern.
Fig. 14 is an explanatory diagram of user operation related to the position of the setting region.
Fig. 15 is a diagram showing a setting area set based on the AF position determination pattern.
Fig. 16 is a diagram showing a characteristic portion in a reference image.
Fig. 17 is a diagram showing a state in which a setting region is set in a characteristic portion.
Fig. 18 is a diagram showing an example of a display screen when the display step is performed.
Fig. 19 is a diagram showing an example of a display screen when the 2 nd display step is performed.
Fig. 20 is a diagram showing a modification of the display screen in the 2 nd display step.
Fig. 21 is a diagram (1) showing an example of a screen when the number of standby areas reaches the maximum display number of standby videos.
Fig. 22 is a diagram showing a flow of a display process when the number of standby areas reaches the maximum number of standby video images to be displayed.
Fig. 23 is a diagram (2) showing an example of a screen when the number of standby areas reaches the maximum display number of standby videos.
Fig. 24 is a diagram showing a modified example of the screen when the number of standby areas reaches the maximum number of standby images to be displayed.
Fig. 25 is a flowchart showing a process performed when the recording step is performed.
Fig. 26 is a diagram showing an example of a screen before the implementation of the changing step.
Fig. 27 is a diagram showing an example of a screen when the number of displayed images is increased by the changing step.
Fig. 28 is a diagram showing an example of a screen when a movement region overlaps with another setting region.
Fig. 29 is a diagram showing an example of a screen when the overlapped moving area and other setting areas are combined into 1 setting area.
Fig. 30 is a diagram showing an example of a screen when the display of the setting image in a certain setting area is stopped.
Fig. 31 is a diagram showing an example of a screen on which proposal information urging a change of the reference region is displayed.
Detailed Description
A preferred embodiment of the present invention (hereinafter, referred to as the present embodiment) will be described in detail with reference to the accompanying drawings.
The present embodiment relates to a method for displaying an image using the imaging device 10 shown in fig. 1 to 3. However, the embodiments described below are merely examples for facilitating understanding of the present invention, and do not limit the present invention. That is, the present invention can be modified or improved from the embodiments described below without departing from the spirit thereof. And, equivalents thereof are included in the present invention.
[ basic Structure of photographing apparatus ]
The imaging device 10 is, for example, a digital camera for capturing images.
Here, unless otherwise specified, "video" refers to live video (a live view image), that is, video captured in real time. The video in the present embodiment is mainly a moving image; that is, the imaging device 10 captures moving images at a predetermined frame rate.
The image pickup device 10 is, for example, a lens-interchangeable type camera, and includes an image pickup device body 12 and an image pickup lens 14, as shown in fig. 1 and 2. The photographing lens 14 is replaceably attached to the mount 13 of the photographing device body 12. However, the present invention is not limited to this, and the imaging device 10 may be a lens-integrated type.
(photographic lens)
As shown in fig. 3, the photographing lens 14 includes an optical unit 17, a diaphragm 20, a zoom driving unit 21, a focus driving unit 22, and a diaphragm driving unit 23.
The optical unit 17 includes an optical device 18 (e.g., zoom lens) for zooming. By moving the optical device 18 in the direction of the optical axis L1, the photographing region of the photographing apparatus 10 is enlarged or reduced to change the zoom. That is, the image pickup apparatus 10 has an optical zoom function by the optical device 18.
The imaging device 10 of the present embodiment has an electronic zoom (digital zoom) function of electronically processing and enlarging an object in a video, in addition to an optical zoom function.
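The electronic (digital) zoom mentioned above amounts to center-cropping the frame and scaling the crop back to the original size. The sketch below uses plain nested lists and nearest-neighbour upscaling purely for illustration; a real implementation would operate on sensor image buffers with proper interpolation.

```python
def digital_zoom(frame, factor):
    """Center-crop `frame` (a list of rows) by `factor`, then scale the
    crop back up by pixel repetition -- a minimal electronic-zoom model."""
    h = len(frame)
    w = len(frame[0])
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = [row[left:left + cw] for row in frame[top:top + ch]]
    # Nearest-neighbour upscale back to the original frame size.
    return [[crop[int(y * ch / h)][int(x * cw / w)] for x in range(w)]
            for y in range(h)]
```

Unlike the optical zoom of optical device 18, nothing moves along the optical axis here; the apparent magnification comes entirely from processing.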
The optical unit 17 includes an optical device 19 (e.g., a focus lens) for focusing. By moving the optical device 19 in the direction of the optical axis L1, the in-focus position of the photographing lens 14 is changed, thereby adjusting the focus.
The imaging device 10 of the present embodiment has an Autofocus (AF) function. That is, when the user performs a predetermined operation (for example, an instruction operation to start recording an image) during image capturing, the focusing drive unit 22 operates to move the optical device 19. As a result, the focus position is automatically adjusted so as to focus on a predetermined portion of the image.
The optical unit 17 may also include a wide-angle lens, an ultra-wide-angle lens, a 360-degree lens, an anamorphic lens, or the like. This allows the imaging device 10 to capture video at a wide angle of view in the lateral (width) direction of the imaging device 10. The angle of view of the imaging device 10 changes with execution of the optical zoom or the like. The maximum angle of view (total angle of view) of the imaging device 10 is determined by the design specifications of the optical unit 17 and of the imaging element 40 described later.
Note that the number of optical units 17 provided in the imaging device 10 is not limited to one; a plurality of optical units 17 having different angles of view may be provided.
The diaphragm 20 has a variable aperture shape, which is adjusted by the diaphragm driving unit 23. Adjusting the aperture shape changes the amount of light admitted through the photographing lens 14, thereby adjusting the exposure.
(photographic apparatus main body)
As shown in fig. 1 and 2, the photographing apparatus body 12 includes an operation unit including an operation button 30. A release button 26, which is one of the operation units, is disposed on the upper surface of the photographing apparatus body 12. When the release button 26 is pressed, recording of the image captured by the imaging device 10 or the cut-out image (strictly, a recorded image described later) is started.
Here, the imaging area of the imaging device 10, that is, the range of its angle of view, is fixed at the moment the user's recording start instruction is received. The imaging area changes according to, for example, the on/off state of electronic camera shake correction. When electronic camera shake correction is off at the time the recording start instruction is received, the imaging area is usually the full-size imaging area (total angle of view).
The operation of the user to instruct the start of recording of an image is not limited to pressing the release button 26, and may be an operation of touching a predetermined position on a display 28 on the rear surface described later.
Hereinafter, an image captured by the imaging device 10 at the angle of view at this time will be referred to as a "reference image". The imaging region of the reference image is referred to as a "reference region".
A display shown in fig. 2 (hereinafter, also referred to as a rear display 28) is provided on the rear surface of the photographing apparatus body 12. The rear Display 28 is formed of, for example, an LCD (Liquid Crystal Display), an Organic EL (Organic Electroluminescence) Display, an LED (Light Emitting Diode) Display, an electronic paper, or the like.
The rear display 28 displays the reference video, the recording video described later, and the like. Because it has such a video display function, the imaging device 10 with its display also serves as a video display device.
The display 28 on the back side can display information other than video images, for example, advice or suggestion information related to image capturing (see fig. 18).
As shown in figs. 2 and 3, the imaging device body 12 of the present embodiment includes an electronic viewfinder (EVF in fig. 3) 29 as a sub-display. The rear display 28 and the electronic viewfinder 29 can display the same video or mutually different videos. A video may also be displayed on the electronic viewfinder 29 while no video is displayed on the rear display 28. Note that the imaging device need not be provided with the electronic viewfinder 29.
A touch panel 36 for detecting user operations is provided inside the rear display 28 or on its exposed surface. For example, when the user touches a predetermined position on the rear display 28, the touch panel 36 detects the touched position and outputs the detection result to the control unit 46 described later.
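Mapping a detected touch position to a setting region can be sketched as a hit test over the region rectangles. This is an assumption about how the control unit might interpret touches; the patent does not specify a tie-breaking rule, so "last-set region wins on overlap" is illustrative only.

```python
def hit_test(touch, regions):
    """Return the index of the setting region containing `touch`, or None.

    touch   -- (tx, ty) position reported by the touch panel
    regions -- list of (x, y, w, h) setting-region rectangles
    On overlap, the most recently set region wins (illustrative choice).
    """
    tx, ty = touch
    hit = None
    for i, (x, y, w, h) in enumerate(regions):
        if x <= tx < x + w and y <= ty < y + h:
            hit = i
    return hit
```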
As shown in fig. 3, the photographing apparatus body 12 is further provided with a shutter 38, an imaging element 40, an analog signal processing circuit 44, a control section 46, an internal memory 50, a card slot 52, and a buffer memory 56.
The imaging element 40 is an image sensor that receives light passing through the photographing lens 14, converts the received optical image into an electric signal (video signal), and outputs it. As the imaging element 40, a solid-state image sensor such as a CCD (Charge-Coupled Device) or CMOS (Complementary Metal-Oxide-Semiconductor) sensor can be used.
The control unit 46 controls each unit of the imaging device 10, and executes various processes related to image capturing, display, recording, and the like. As shown in fig. 3, the control unit 46 includes a controller 47 and an image processing unit 48.
The control unit 46 includes 1 or more processors, and the control unit 46 is configured by cooperation of the processors and the control program. The Processor may be, for example, a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), a GPU (Graphics Processing Unit), or another IC (Integrated Circuit). Alternatively, these components may be combined to form a processor.
Alternatively, the functions of the entire control unit 46, including the controller 47 and the image processing unit 48, may be implemented by a single IC (Integrated Circuit) chip, as typified by an SoC (System on Chip).
The hardware of each processor described above can be realized by circuitry combining circuit elements such as semiconductor elements.
The controller 47 controls each part of the imaging apparatus 10 based on a user operation or according to a predetermined control program. For example, the controller 47 controls the imaging element 40 and the analog signal processing circuit 44 so that images are taken at a predetermined frame rate.
The controller 47 controls the driving units 21 to 23, the imaging element 40, the analog signal processing circuit 44, and the like so that the imaging conditions are set to desired conditions. The photographing conditions include focus (focus), exposure amount, white balance, and the like. That is, the controller 47 realizes an Auto Focus (AF) function, an Auto Exposure (AE) function, and an Auto White Balance (AWB) function, which are automatic adjustment functions of the imaging apparatus 10.
When receiving a user's recording start instruction, the controller 47 controls the image processing unit 48 to start recording a video.
The image processing unit 48 converts the analog video signal sent from the analog signal processing circuit 44 into digital video data, and performs gamma correction, white balance correction, and the like on that data. The processed digital video data is compressed in a compression format conforming to a predetermined standard.
The image processing unit 48 generates compressed digital video data at a predetermined frame rate during shooting, and acquires a video (strictly speaking, frame images) from that data. The video acquired at this time corresponds to the reference video captured in the reference region.
The image processing unit 48 cuts out a recording video from the reference video under the control of the controller 47. The recording video has a smaller angle of view than the reference video and is the video of a partial area of the reference region. The recording video is generated at the same frame rate as the reference video and is displayed on, for example, the rear display 28 or the electronic viewfinder 29.
The recording video is then recorded on a recording medium, creating a video file. The resolution (number of pixels) and aspect ratio of the recording video are determined according to its recording format (moving picture format).
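The cut-out operation described above, i.e., extracting the recording video's frame as a partial area of the reference frame, can be sketched with a reference frame stored as a list of rows. The `(x, y, w, h)` region convention is an assumption for illustration.

```python
def cut_out(frame, region):
    """Cut a recording-video frame out of a reference frame.

    frame  -- reference frame as a list of rows
    region -- (x, y, w, h): top-left corner, width, and height of the
              recording area within the reference region
    """
    x, y, w, h = region
    return [row[x:x + w] for row in frame[y:y + h]]
```

Switching the recording area then amounts to calling this with a different region on subsequent frames, while the reference frame itself is unchanged.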
The recording format is not particularly limited and can be freely determined.
While electronic camera shake correction is enabled (on), the image processing unit 48 extracts, from the pre-correction video captured at the total angle of view, an extracted video slightly smaller than the pre-correction video, as shown in fig. 4. When the extracted video is displaced by vibration or the like, the image processing unit 48 performs electronic camera shake correction by shifting the position of the extraction region within the pre-correction video (for example, from the solid-line position to the broken-line position in fig. 4).
While electronic camera shake correction is on, the extracted video serves as the reference video.
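The correction described above can be sketched as shifting the extraction window opposite to the detected shake and clamping it inside the full-frame (total angle of view) bounds. The tuple conventions and the simple subtract-and-clamp rule are illustrative assumptions; real stabilization also filters the motion estimate over time.

```python
def stabilized_window(shake, full, window):
    """Shift the extraction window to cancel detected shake.

    shake  -- (dx, dy) detected displacement of the scene in pixels
    full   -- (fw, fh) size of the pre-correction (total angle of view) frame
    window -- (x, y, w, h) current extraction window
    The window moves opposite to the shake and is clamped to the frame.
    """
    (dx, dy), (fw, fh), (x, y, w, h) = shake, full, window
    nx = min(max(x - dx, 0), fw - w)
    ny = min(max(y - dy, 0), fh - h)
    return (nx, ny, w, h)
```

The margin between the extracted video and the full frame (the "slightly smaller" size) is exactly the room this clamp has to work with.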
Hereinafter, unless otherwise specified, the operation and processing of the controller 47 and the image processing unit 48 will be described as the operation and processing of the control unit 46.
The internal memory 50 built into the imaging device body 12 and the memory card 54 detachably mounted in the card slot 52 are recording media, on which, for example, the recording video is recorded.
The internal memory 50 and the memory card 54 may be provided in a device (external) different from the imaging apparatus main body 12. In this case, the control unit 46 may record a recording image on an external recording medium by wire or wirelessly.
The buffer memory 56 functions as a work memory of the control unit 46.
[ recording and switching of recorded image ]
The imaging device 10 records on a recording medium a recording video cut out from the reference video, and can switch the recording video by changing the cut-out region within the reference video. This function is explained with reference to fig. 5.
In fig. 5, for convenience of explanation, the subject in the video is not shown.
When the imaging device 10 starts imaging, the reference video is displayed on a display (e.g., the rear display 28). Generally, the video at the total angle of view is the reference video, and the region surrounded by the outermost edge in fig. 5 is the reference region As. While electronic camera shake correction is on, the extracted video taken from the total-angle-of-view video becomes the reference video, and the area surrounded by the dashed-line frame in fig. 5 becomes the reference region At.
When the user performs a predetermined operation before or during imaging, the control unit 46 sets a setting region Ar within the reference region As or At. The setting region Ar is a part of the reference region As, At; for example, a subject in the reference video appears in the setting region Ar (see fig. 6).
The position and size of the setting region Ar can be adjusted automatically or based on user operation. For example, by performing a predetermined operation on the screen, the user can change the position and size of the setting region Ar in which a subject appears, according to the position and size of that subject in the reference video.
Here, in the present embodiment, the aspect ratio of the setting area Ar is determined to be a value corresponding to the recording format (moving picture format) of the recorded video. Therefore, when the size of the setting area Ar is changed, the size is changed in a state where the aspect ratio of the setting area Ar is constant. However, the aspect ratio of the setting area Ar may be determined independently of the recording format of the recorded image, and in this case, the size of the setting area Ar may be changed while changing the aspect ratio.
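The aspect-ratio-locked resizing described above can be sketched as deriving the new height from the new width and the fixed ratio. The function and parameter names are illustrative; the patent only requires that the ratio stay constant while the size changes.

```python
def resize_locked(region, new_width, aspect):
    """Resize a setting region to `new_width` while holding its aspect
    ratio (width / height) constant, as the recording format requires.

    region -- (x, y, w, h); position is kept, size is recomputed
    aspect -- fixed width-to-height ratio, e.g. 16 / 9
    """
    x, y, _, _ = region
    return (x, y, new_width, round(new_width / aspect))
```

In the alternative the paragraph mentions, where the ratio is independent of the recording format, the height would instead be a free parameter of the resize operation.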
As shown in fig. 5, when the setting area Ar is set, the mark f is displayed in the reference image displayed on the display at that point in time.
The flag f indicates the position of the setting area Ar in the reference image, including the boundary of the setting area Ar. The boundary of the setting area Ar is set for identifying (distinguishing) the setting area Ar and its periphery.
Examples of the flag f including the boundary include a frame surrounding the setting area Ar and a graphic object such as a pointer indicating a representative position of the setting area Ar. Further, differences in lightness, chromaticity, shading, and the like between the setting area Ar and the periphery thereof also correspond to the flag f. In addition, the setting area Ar may be included in the category of the flag f as long as the setting area Ar can be distinguished from the periphery thereof.
Hereinafter, a case where the mark f is a colored rectangular frame will be described.
As shown in fig. 5, a plurality of setting regions Ar can be set in the reference image. In this case, the flags f are displayed in the respective setting areas Ar. Then, the identification information (the number shown on the upper right of the mark f in fig. 5) is displayed on each mark f.
The identification information is information that the control unit 46 assigns individually to each flag f according to a predetermined assignment rule. The identification information is displayed simultaneously with the marks f, whereby the user can distinguish the marks f from each other.
The identification information is not limited to this, and may be a number, a character, or a symbol, or may be a display color of the mark f that is distinguished for each mark f. When the number is identification information, the number is preferably a serial number.
When the setting of the setting areas Ar is completed, the user selects the recording area Ap, i.e., the area whose video is to be recorded, from the plurality of setting areas Ar that have been set. The control unit 46 selects the recording area Ap from the plurality of setting areas Ar in accordance with the user's selection operation. In this way, the recording area Ap, that is, the setting area Ar whose video is recorded, is determined.
When determining the recording area Ap, the control unit 46 displays, in a manner different from each other, the flag f indicating the position of the recording area Ap and the flags f indicating the positions of the other setting areas Ar, among the flags f for each setting area Ar displayed in the reference image. For example, the display form of the mark f includes the color, brightness, density (gradation), shape, edge line thickness, line type, and presence or absence of blinking display of the mark f.
In the case shown in fig. 5, the mark f indicating the position of the recording area Ap is displayed in a different color and thickness from the other marks f.
When the user instructs the start of recording after determining the recording area Ap, recording of the video of the recording area Ap (hereinafter, the recording video) is started. At this point in time, the reference area (in other words, the orientation and angle of view of the imaging device 10) is determined.
The reference area is determined once when the recording start instruction is received, but the user can still change the reference area thereafter. When the user changes the reference area, it is preferable to change the position of the setting area in accordance with the position of the subject, as described later.
As described above, by recording a part of the reference image as a recording image, for example, a zoom image of a portion of the reference image that is focused on can be recorded.
The recording video has a smaller angle of view than the reference video, but if the reference video is of high image quality (for example, 10 million pixels or more), the recording video can also be of sufficiently high image quality.
The number of pixels of the reference video is not particularly limited; its lower limit is preferably 10 million or more, and more preferably 60 million or more. The upper limit of the number of pixels is preferably 1 billion or less, and more preferably 500 million or less. If the number of pixels exceeds the lower limit, the visibility of the recording video is ensured. If the number of pixels is below the upper limit, the data amount of the reference video is reduced, and processing by the control unit 46 can be sped up.
In the present embodiment, after selecting 1 setting area Ar determined as the recording area Ap from the plurality of setting areas Ar, the recording area Ap can be switched to another setting area Ar during image recording. As a result, the video of the recording area Ap before switching and the video of the recording area Ap after switching can be recorded separately. The details thereof will be explained with reference to fig. 6.
In a scene in which the imaging device 10 captures a plurality of subjects (persons) at a certain location, a setting area Ar is set for each subject as shown in fig. 6. In this case, one setting area Ar selected by the user from the plurality of setting areas Ar becomes the recording area Ap. In the case shown in fig. 6, four setting areas Ar are set, and the setting area Ar surrounded by the mark f with identification information "1" is the recording area Ap.
When the user instructs the start of recording under the above conditions, recording of the video of the setting area Ar that is the recording area Ap at that time is started.
Then, when the user instructs to change the recording area Ap to another setting area Ar (for example, the setting area Ar surrounded by the flag f of the identification information "2") during the image recording, the recording area Ap is reselected and the recorded image is switched.
As described above, in the present embodiment, during image recording, it is possible to switch recording images by reselecting the recording area Ap. Thus, it is possible to capture a zoom image of each object without installing each camera for each of a plurality of objects.
When the recording video is switched, the control unit 46 records both the recording video before switching and the recording video after switching, and creates one moving image by combining them. As a result, as shown in fig. 7, a file (video file) can be obtained in which the subject shown in the video changes at the switch of the recording video.
However, the present invention is not limited to this; the recording video before switching and the recording video after switching may be recorded as separate files without being combined. The reference video and the standby area described later may likewise be recorded as separate file data.
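The switch-and-record behavior can be sketched as a toy Python model (all names hypothetical; real recording would involve a video codec and container format, which are out of scope here). The `combine` flag mirrors the two options above: one joined moving image, or separate file data per segment:

```python
class SwitchingRecorder:
    """Toy model of recording with mid-recording switching of the area Ap."""

    def __init__(self, combine=True):
        self.combine = combine
        self.segments = []        # list of (area_id, [frames]) per selection
        self.current = None

    def select(self, area_id):
        """Select (or reselect) the recording area Ap; starts a new segment."""
        self.current = area_id
        self.segments.append((area_id, []))

    def record_frame(self, frame):
        """Append a frame to the segment of the currently selected area."""
        self.segments[-1][1].append(frame)

    def finish(self):
        if self.combine:          # one combined moving image
            return [f for _, frames in self.segments for f in frames]
        return self.segments      # separate data per recording area
```

Reselecting an area simply opens a new segment, so the pre-switch and post-switch videos are both retained, as the embodiment requires.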
In the present embodiment, as shown in figs. 5 and 6, a mark f indicating the position in the reference video is displayed for each of the plurality of setting areas Ar, including the recording area Ap. This enables the user to grasp the position of each setting area Ar during imaging. In other words, the display method of the present embodiment improves convenience (usability) in that the position of each setting area Ar is easily grasped. This effect is particularly pronounced in a configuration in which the recording area Ap can be switched among the plurality of setting areas Ar. That is, by displaying a mark f in each setting area Ar, the user can easily grasp which part of the reference video the recording video after switching corresponds to.
Note that when the number of subjects in the reference video changes, the number of setting areas Ar (the set count) can be changed accordingly, either automatically or by user operation. For example, as shown in fig. 8, when a new subject enters the reference video, a new setting area Ar can be added at the position of that subject. Conversely, when a subject in the reference video moves out of it, the setting area Ar that contained the video of that subject can be automatically removed.
When the object in a certain setting area Ar (for example, in fig. 8, the setting area Ar surrounded by the flag f of the identification information "5") moves, the setting area Ar can be displaced so as to follow the movement of the object. This allows tracking of the moving object in the reference image. Here, the tracking of the subject can be realized by a known subject detection technique based on image analysis.
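Assuming the subject detector returns a center point per frame (an assumption; the patent only says known detection techniques apply), the follow-the-subject displacement of a setting area can be sketched as re-centering the box on the detected center, clamped inside the reference area:

```python
def follow_subject(area, subject_center, ref_w, ref_h):
    """Re-center an (x, y, w, h) setting area on a detected subject center,
    keeping the area fully inside the (ref_w, ref_h) reference area."""
    x, y, w, h = area
    cx, cy = subject_center
    nx = min(max(cx - w // 2, 0), ref_w - w)   # clamp horizontally
    ny = min(max(cy - h // 2, 0), ref_h - h)   # clamp vertically
    return (nx, ny, w, h)
```

Calling this once per frame with the tracker's output realizes the moving area Am described below; constant-velocity motion would instead increment `(x, y)` by a fixed step each frame.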
When the set area Ar for tracking the moving object is the recording area Ap, the image of the moving object can be recorded.
Hereinafter, the set area Ar for tracking the moving object is referred to as a moving area Am. The moving region Am is not limited to a region that moves following a moving object, and may be a region that moves at a constant speed or an irregular speed in a predetermined direction.
[ video display flow ]
A process flow (hereinafter, a video display flow) executed by the control unit 46 of the imaging apparatus 10 will be described with reference to fig. 10.
When the user turns on the power of the imaging device 10, the video display flow starts. When the video display flow starts, the user performs initial adjustment of the imaging device 10. In the initial adjustment, for example, the recording format is designated, and the resolution (number of pixels) and aspect ratio of the recording video are adjusted. This adjustment is performed, for example, at the first startup of the imaging device 10, and the adjusted content is retained thereafter.
The adjustment may be performed by the device manufacturer before shipment of the imaging device 10. In addition, in order to modify the resolution, the aspect ratio, and the like that have been adjusted once, the user can readjust these values after the start-up of the photographing apparatus 10.
In the video display flow, first, the control unit 46 starts the shooting step (S001). The shooting step is a step of shooting the reference video in the reference area at that point in time. Once shooting starts, it is preferable that the live reference video be displayed as a live-view video on the rear display 28 or the like. The shooting step continues until the user instructs the end of shooting or until the power of the imaging device 10 is turned off.
After that, the control unit 46 performs a setting step (S002). The setting step is a step of automatically setting a plurality of setting areas Ar in the imaging area of the reference image. In the setting step, the initial position and the initial size of each setting area Ar are set, and thereafter the position and the size of each setting area Ar can be automatically changed based on a user operation or according to a predetermined rule.
After the setting step is completed, the control unit 46 performs a selection step (S003). The selection step is a step of selecting the recording area Ap from the plurality of setting areas Ar set in the setting step, based on a selection operation by the user.
When the selection step is performed, the control unit 46 may display images of the plurality of setting areas Ar (hereinafter referred to as "setting images") on the display. In this case, the user can confirm the setting images for each of the plurality of setting areas Ar.
After that, when the user instructs to start recording (S004), the controller 46 starts the recording step (S005). The recording step is a step of recording a recorded video image, which is a video image of the selected recording area Ap, on a recording medium. The recorded video is recorded in a predetermined recording format (moving picture format).
As described above, the reference area is determined at the time of receiving a recording start instruction that triggers the start of the recording process.
During the recording step, the control unit 46 performs a display step (S006). The display step is a step of displaying the plurality of setting images including the recording image, the reference image at the time point, and the mark f in the reference image on the display. The user can confirm the position of each setting area Ar in the reference image by observing these images, and grasp which setting area Ar is the recording area Ap.
While the recording step is being performed, the control unit 46 performs a control step (S007). The control step is a step of executing a control process related to the change of the reference area determined at the reception time of the recording start instruction. In the present embodiment, the control processing includes processing for controlling the change of the reference region, and is control processing relating to zooming (optical zooming) by the optical device 19, for example.
When the user instructs to newly select the recording area Ap during the recording step (S008), the controller 46 performs the switching step (S009). The switching step is a step of, after the selection step is performed, reselecting the recording area Ap from the plurality of setting areas Ar in accordance with a reselection instruction by the user and switching the recording area Ap. When the switching step is performed, the video image (recording video image) of the recording area after the switching is recorded in the subsequent recording step.
Then, until the user instructs the end of the video recording, a series of steps (i.e., S005 to S009) from the recording step are repeatedly performed (S010). On the other hand, if there is an instruction to end the video recording, the recording step is ended at that point.
Then, when there is an instruction to end shooting or the power of the imaging device 10 is turned off (S011), the video display flow ends at that point.
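The flow of steps S001 to S011 above can be sketched as a small event-driven loop. This is a toy Python model with hypothetical event names ("start", "reselect", "stop_record"); it only traces which steps run, not their contents:

```python
def video_display_flow(events):
    """Trace steps S001-S011 driven by a list of user events."""
    log = ["S001:shoot", "S002:set", "S003:select"]   # always run first
    recording = False
    for ev in events:
        if ev == "start" and not recording:           # S004: start instruction
            recording = True
            log += ["S005:record", "S006:display", "S007:control"]
        elif ev == "reselect" and recording:          # S008: reselect Ap
            log.append("S009:switch")
        elif ev == "stop_record":                     # S010: end recording
            recording = False
    log.append("S011:end")                            # end of shooting
    return log
```

In the real device, S005 to S009 repeat continuously while recording; the trace collapses that repetition into one pass per event.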
[ setting step ]
The setting process in the image display flow will be explained.
In the present embodiment, when the setting step is executed, a user interface (UI) screen 101 shown in fig. 11 is displayed on the rear display 28. The user selects either the "recording pattern" or the "automatic setting pattern" as the setting pattern of the setting area Ar, and presses, of the buttons Bt1 and Bt2 on the UI screen 101, the button indicating the selected pattern.
In the setting step when the recording pattern is selected, the control unit 46 reads position information of the setting areas Ar stored in advance in the imaging device 10. Then, the setting areas Ar are automatically set at the positions indicated by the read position information. In this case, to make the marks f easy for the user to see, it is preferable to highlight the images within the marks f in the reference video. For example, in the reference video, the brightness or gradation of the portions other than the images within the marks f (i.e., within the boundaries of the setting areas Ar) may be reduced (see fig. 18).
In the present embodiment, when the recording pattern is selected, as shown in fig. 11, the 2 nd UI screen 102 is displayed on the display 28 on the back side. The user selects any one of a plurality of recording patterns prepared in advance, and presses a button representing the selected pattern among the buttons Bt3, Bt4, Bt5 on the 2 nd UI screen 102. As the recording pattern, for example, "last use pattern", "preset pattern", and "user input pattern" can be selected.
On the other hand, in the setting step when the automatic setting pattern is selected, the control unit 46 automatically sets the setting area Ar at a predetermined position in the reference image based on the reference image during the shooting.
In the present embodiment, when the automatic setting pattern is selected, a 3rd UI screen 103 is displayed as shown in fig. 11. The user selects any one of a plurality of automatic setting patterns prepared in advance, and presses, of the buttons Bt6, Bt7, and Bt8 on the 3rd UI screen 103, the button representing the selected pattern. As the automatic setting pattern, for example, a "scene recognition pattern", an "object detection pattern", and an "AF position determination pattern" can be selected. In the case of the automatic setting pattern, it is preferable that the setting areas Ar do not overlap one another at the start of the setting step, and that the marks f are displayed with the setting areas Ar not overlapping. This makes each of the plurality of marks f easy to identify, and makes the setting areas Ar easy to set in the setting step.
Hereinafter, a plurality of recording patterns and a plurality of automatic setting patterns will be described.
(last use pattern)
When the previous usage pattern is selected, the control unit 46 reads the position information of the setting area Ar set in the setting step in the previous (immediately preceding) image display flow. The position information indicates the positions of the plurality of setting areas Ar set in the setting step of the previous image display flow, and is stored in, for example, the internal memory 50 of the imaging apparatus 10. The stored position information is preferably retained after the power of the photographing apparatus 10 is turned off.
Further, the size information of the setting area Ar may be stored together with the position information of the setting area Ar. In this case, when the last usage pattern is selected, the position information and the size information can be read together.
Then, as shown in fig. 12, the control unit 46 automatically sets the setting areas Ar at the positions indicated by the read position information, that is, the same positions as in the previous video display flow. Such a last-use pattern is effective when shooting with the same composition as the previous video display flow (for example, when shooting at the same position and the same angle of view).
In the above-described embodiment, the read position information indicates the position of the setting area Ar set in the last (immediately preceding) image display flow, but the present invention is not limited to this. For example, the position information of the setting area Ar set in the setting process performed several times in the past may be stored. In this case, the setting area Ar of this time may be set at the position of the setting area Ar set in the setting process of one time designated by the user in the stored position information.
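The persistence behind the last-use pattern, i.e., storing setting-area boxes so they survive power-off and reading them back on the next flow, can be sketched with a simple JSON file (a stand-in for the device's internal memory 50; file name and format are assumptions):

```python
import json
import os

def save_areas(path, areas):
    """Persist setting-area (x, y, w, h) boxes so they survive power-off."""
    with open(path, "w") as fp:
        json.dump(areas, fp)

def load_areas(path):
    """Read back the boxes from the previous flow, or [] if none exist."""
    if not os.path.exists(path):
        return []
    with open(path) as fp:
        return [tuple(a) for a in json.load(fp)]
```

Storing size together with position, as the text suggests, falls out naturally here since each entry is a full box; keeping one file per past flow would support the "designated past setting step" variant.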
(Preset Pattern)
When the preset pattern is selected, the control unit 46 reads position information of setting areas Ar arranged in a predetermined regular arrangement pattern. The regular arrangement pattern is registered in advance in the imaging device 10, or newly registered through a program update after purchase.
When the position information relating to the regular arrangement pattern (preset pattern) is read, a 4th UI screen 104 shown in fig. 13 is displayed on the rear display 28. On the 4th UI screen 104, a plurality of types of patterns (patterns A to D in fig. 13) are displayed so as to be selectable, and the user selects any one of the preset patterns. The control unit 46 automatically sets the setting areas Ar at the positions corresponding to the preset pattern selected by the user. Such a preset pattern is effective when shooting with a fixed composition (for example, when shooting at a specific position) and a pattern corresponding to that composition exists.
The number of types of the preset patterns and the number and arrangement positions of the setting regions Ar in each preset pattern are not limited to these, and may be determined arbitrarily.
(user input pattern)
When the user input pattern is selected, the control unit 46 reads the position information of the setting area Ar determined based on the user operation. The user operation is an input operation for registering the position of the setting area Ar, and is performed in a stage prior to the setting step (for example, in a stage of initial registration relating to the pattern). The above-described input operation is performed, for example, on the input screen 105 for pattern registration shown in fig. 14. The user touches and drags the area inside the area setting flag fx displayed on the input screen 105. Thereby, the display position of the dragged flag fx is registered as the position of the setting area Ar.
The control unit 46 automatically sets the setting areas Ar at the positions indicated by the read position information, that is, at the positions determined based on the user's input operation. Such a user input pattern is effective for preparing a pattern based on the user's input in a situation that none of the preset patterns can accommodate.
In addition, the size of the setting area Ar may be registered together at the position where the setting area Ar is registered. That is, information indicating the position and size of the setting area Ar determined based on the input operation by the user may be stored. In this case, the plurality of setting areas Ar are set at positions and sizes registered as user input patterns, respectively.
(scene recognition pattern)
In the setting step when the scene recognition pattern is selected, the control unit 46 recognizes the photographed scene of the reference image based on the reference image. The shooting scene of the reference image is, for example, a scene shot as the reference image, an event, the type of the subject, a scene (for example, a daytime scene or a night scene), a situation (for example, weather, or the like), and the like.
As a method of recognizing the shooting scene from the reference video, a known scene recognition technique can be used. One example is a method of determining whether the brightness or color tone of the video in a predetermined region is within a predetermined range. Alternatively, machine learning may be performed using, as a training data set, image data paired with ground-truth labels indicating the shooting scene, and the shooting scene of the reference video can then be recognized by the scene recognition model obtained as a result of the learning.
(object detection Pattern)
In the setting step when selecting the object detection pattern, the control unit 46 detects the object in the reference image, and automatically sets the setting area Ar at the position of the detected object. The object in the reference image is an object other than the scenery in the reference image, and is an object that can be detected by the control unit 46, and includes, for example, a person, an animal, a vehicle, and the like.
When detecting a subject, a known template matching technique can be applied. For example, an image of a subject may be stored in advance as a template image, the template image may be matched against each portion of the reference video, and a portion similar to the template image may be detected as the subject. Alternatively, a model that recognizes subjects in video may be built by machine learning, and subjects in the reference video may be detected using that model. As the algorithm of the subject recognition model, a known algorithm such as YOLO (You Only Look Once) or R-CNN (Regions with Convolutional Neural Networks) can be used.
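To make the template matching idea concrete, here is a minimal sketch using the sum of absolute differences (SAD) over grayscale pixel grids. This is one classic matching criterion, not necessarily the one the device uses; images are plain lists of lists for self-containment:

```python
def match_template(image, template):
    """Return the (row, col) of the best template match by SAD.

    image and template are 2D lists of grayscale pixel values;
    the template is slid over every valid position of the image.
    """
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(th) for j in range(tw))
            if best is None or sad < best:   # smaller SAD = more similar
                best, best_pos = sad, (r, c)
    return best_pos
```

A production implementation would use normalized cross-correlation or a learned detector, but the sliding-window structure is the same.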
In the object detection pattern, as shown in fig. 6, the control section 46 can set the set region Ar for each detected object. In this case, the entire corresponding subject may be accommodated in each setting region Ar, or a part of the corresponding subject (for example, an upper body part including a face) may be accommodated.
According to the subject detection pattern described above, the setting areas Ar can be set automatically for the subjects in the reference video without effort on the user's part. In the present embodiment, in which the video of the recording area Ap selected from the plurality of setting areas Ar is recorded, the subject detection pattern is effective for recording the video of a subject in the reference video.
(AF position determining pattern)
In the setting step when the AF position determination pattern is selected, the control unit 46 determines a position in the reference video at which a feature used by the imaging device 10 to determine focus satisfies a reference, and automatically sets the setting area Ar at the determined position. The feature for determining focus is, for example, a feature for determining the in-focus position in the reference video. As an example, when the autofocus is of the phase-difference AF (Auto Focus) type, the phase difference corresponds to the above feature. When the autofocus is of the contrast AF type, the contrast between a plurality of pixels corresponds to the above feature. The AF method may also be an AF method using directional light or an AF method using DFD (Depth From Defocus).
The position satisfying the reference is a position where the above-described feature (for example, phase difference, contrast, or the like) is a feature in focus. In short, the control unit 46 specifies the position focused by the AF function as the position satisfying the reference in the reference picture. Then, as shown in fig. 15, the setting area Ar is automatically set with respect to the determined position (position in focus, hatched position in fig. 15). In this case, for example, in the reference image, the size of the setting region Ar may be automatically set according to the area of the portion in focus or the like.
In general, a video of a portion focused by the AF function is a recording target in many cases. Based on this, by setting the setting area Ar at the position of focus, it is possible to easily record an image in focus, and thus to improve convenience for the user.
In the AF position determination pattern, when a setting area Ar is set at the in-focus position, the control unit 46 automatically sets another setting area Ar along with it. For example, as shown in fig. 15, another setting area Ar is set at a position symmetric to the in-focus position (for example, point-symmetric about the center of the reference video). Thus, when the AF position determination pattern is selected, a plurality of setting areas Ar can be set automatically in the reference area. However, the position of the other setting area Ar is not limited to the position described above.
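The point-symmetric placement about the image center reduces to simple arithmetic on the box coordinates. A minimal sketch (coordinate convention assumed: top-left origin, boxes as (x, y, w, h)):

```python
def mirror_area(area, ref_w, ref_h):
    """Place a second setting area at the position point-symmetric to
    `area` about the center of a (ref_w x ref_h) reference area."""
    x, y, w, h = area
    return (ref_w - x - w, ref_h - y - h, w, h)
```

Mirroring the box corners rather than only the center keeps the companion area the same size and fully inside the frame whenever the original is.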
In the AF position determination pattern, when the entire reference image is focused, the entire reference area may be set as 1 set area Ar.
In the case where the subject is not detected in the reference image and an appropriate setting position of the setting area Ar is not observed in the reference image, the setting area Ar may be set in the vicinity of the center of the reference image.
In the present embodiment, an in-focus position in a reference image is given as an example of imaging conditions, and the set area Ar is set in the reference image at a position (i.e., a focused position) where a feature for determining the in-focus position satisfies a reference. Here, the imaging condition may be a condition other than the in-focus position, and examples thereof include an exposure amount and a white balance. In the imaging conditions mentioned here, the shutter speed and ISO sensitivity are excluded.
Here, taking the exposure amount as one example of the imaging conditions, the setting area Ar can be set at a position in the reference video where a feature for determining the exposure amount (for example, a pixel value or luminance) satisfies a reference. For example, the setting area Ar may be set at a position in the reference video where the exposure amount is adjusted to an appropriate value by the AE (Auto Exposure) function. Alternatively, setting areas Ar may be set at a position where the exposure amount is relatively low and at a position where it is relatively high, respectively.
Further, taking the white balance as one example of the imaging conditions, the setting area Ar may be set at a position in the reference video where a feature for determining the white balance (for example, chromaticity) satisfies a reference. For example, the setting area Ar may be set at a position in the reference video where the white balance is adjusted to an appropriate value by the AWB (Auto White Balance) function. Specifically, when there are regions in the reference video irradiated by each of a plurality of light sources, such as a lamp and the sun, a setting area Ar may be set for each light source's irradiated region.
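For the exposure-based placement described above, one plausible reading is: scan fixed-size blocks of the luminance map and pick the block whose mean is closest to a "properly exposed" target level. The sketch below assumes 8-bit luminance and a target of 118 (a common mid-gray value, not taken from the patent):

```python
def best_exposed_block(luma, block, target=118):
    """Return the (row, col) of the block-aligned region whose mean
    luminance is closest to `target`; luma is a 2D list of pixel values."""
    rows, cols = len(luma), len(luma[0])
    best, pos = None, None
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            mean = sum(luma[r + i][c + j]
                       for i in range(block) for j in range(block)) / block ** 2
            score = abs(mean - target)       # distance from the reference
            if best is None or score < best:
                best, pos = score, (r, c)
    return pos
```

The same scan with a chromaticity map instead of luminance would serve the white-balance variant.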
The user selects an arbitrary pattern from the plurality of arrangement patterns described above. Thus, in the setting step, the setting areas Ar can be set in a way that reflects the user's intention or preference. Moreover, since the setting areas Ar are set automatically in the setting step, the user is spared a great deal of effort (setting work). That is, the display method of the present embodiment improves convenience (usability) in that the setting areas Ar are set in the reference area while saving the user effort and reflecting the user's intention. The arrangement pattern may also be a combination of the plurality of arrangement patterns shown in fig. 11. For example, a plurality of setting areas Ar may be set using the scene recognition pattern and the AF position determination pattern simultaneously.
The above-described pattern is merely an example, and an arrangement pattern other than the above-described pattern may be added, or a part of the above-described pattern may be omitted.
In the setting step of the present embodiment, when a feature portion, which will be described later, exists in the reference image regardless of the type of the selected pattern, the candidate Ac of the setting region is automatically set at the position of the feature portion in the reference region. Then, when the user performs a predetermined operation on the set candidate Ac, the setting area Ar is set at the position of the set candidate Ac.
Hereinafter, a procedure of setting the candidate Ac in the setting step will be described.
(setting of candidates)
In the setting step, the control unit 46 determines whether a characteristic portion exists in the reference video based on the feature (for example, the pixel value) of each pixel of the reference video. The characteristic portion is a portion of the reference video that changes visually (excluding portions detected as subjects), such as a moving portion of the background video or a portion whose color, brightness, or the like changes. Specific examples include flowing water such as a waterfall, lighting that turns on and off, and an electronic bulletin board whose displayed characters or images switch.
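A crude way to find such visually changing portions is frame differencing on the pixel values, which is one known technique and only a stand-in for whatever the device actually uses. Grids are plain lists of lists; the threshold is an assumed tuning parameter:

```python
def changed_cells(prev, curr, threshold=10):
    """Cells whose pixel value changed by more than `threshold` between two
    frames; clusters of such cells would mark a candidate feature portion."""
    return [(r, c)
            for r in range(len(prev))
            for c in range(len(prev[0]))
            if abs(curr[r][c] - prev[r][c]) > threshold]
```

In practice, the changed cells would be grouped into connected regions, and a candidate Ac would be placed around each region, excluding regions already detected as subjects.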
In the reference image shown in fig. 16, for example, the illumination appearing in the upper left portion in the figure corresponds to a characteristic portion.
When the characteristic portion exists in the reference image, the control unit 46 performs the 1 st setting step. The 1 st setting step is a step of automatically setting the candidate Ac of the setting region at the position of the feature in the reference image.
When the 1st setting step is performed, a mark indicating the position of the candidate Ac in the reference video (hereinafter, the candidate mark K) is displayed in the display step performed later, as shown in fig. 16. The candidate mark K is displayed in a form different from the mark f indicating the position of a setting area Ar. For example, the candidate marks K in fig. 16 are substantially L-shaped graphic objects indicating the four corners of a rectangular region surrounding the characteristic portion. The type, shape, and the like of the candidate mark K are not particularly limited, and may be, for example, an arrow, a finger-shaped pointer, or a graphic object consisting of a symbol such as a circle or a star. Alternatively, the candidate mark K may be a difference in gradation (difference in brightness) between the candidate Ac and its surroundings in the reference video.
When the user performs an operation such as clicking the candidate mark K during execution of the display step, the 2nd setting step is executed with this operation as a trigger. The 2nd setting step is a step in which the control unit 46 sets a setting region Ar at the position of the candidate Ac on which the operation was performed, in accordance with the user's operation (for example, a click) on the candidate Ac. By performing the 2nd setting step, a new setting area Ar is set in the image region of the characteristic portion, as shown in fig. 17.
As described above, in the present embodiment, the candidate Ac of the set region can be automatically set to the characteristic portion that is not recognized as the subject but visually changes in the reference image. Further, by displaying the candidate mark K indicating the position of the set candidate Ac in the reference image, the user can easily confirm the candidate Ac. Then, the user performs a predetermined operation on the candidate Ac to set the setting region Ar at the position of the candidate Ac. Thus, the setting area Ar can be set for the characteristic portion of interest to the user in the reference image. When the set area Ar set in this manner is selected as the recording area, the image of the characteristic portion can be recorded.
[ display Process ]
Hereinafter, the display process in the image display flow will be described in detail.
When the setting area Ar is set in the setting step, the control unit 46 starts the display step. After that, the control unit 46 continues the display process until the recording process is completed (strictly speaking, until the imaging is completed).
The display step is a step of displaying the recording image, the reference image, and the mark f indicating the position of each of the plurality of setting areas Ar in the reference image on the display. In the display step, as shown in fig. 18, the mark f is displayed for each setting area Ar in the reference image. This enables the user to reliably recognize the position of each setting area Ar in the reference image.
In the display step, as shown in fig. 18, images (hereinafter, referred to as standby images) of the setting areas Ar other than the recording area Ap among the plurality of setting areas Ar can be displayed.
In the display step when the characteristic portion exists in the reference image, the candidate mark K indicating the position of the characteristic portion is displayed in the reference image together with the mark f.
As an example of the display step, the recording image, the reference image, and the standby images can be displayed simultaneously on the screen of the rear display 28 (hereinafter referred to as the display screen). For example, as shown in fig. 18, the display screen is divided into a main screen G1, sub-screens G2, and an information display screen G3. In this case, the standby images can be displayed on the sub-screens G2.
The main screen G1 occupies a relatively wide area (for example, more than half of the display screen) on the display screen, and displays a reference video. The aspect ratio of the main picture G1 is preferably set to the same value as the aspect ratio of the reference picture.
The sub-picture G2 is a picture smaller than the main picture G1 and is aligned in a line in the vertical or horizontal direction of the display screen. Each sub-screen G2 displays a recorded image or a standby image. The aspect ratio of each sub-screen G2 is preferably set to the same value as the aspect ratio of each of the recording video and the standby video.
The information display screen G3 is a screen on which text information other than video is displayed, and for example, as shown in fig. 18, suggestions related to video recording are displayed.
The layout of the display screen (for example, the size and arrangement position of each of the main screen G1 and the sub screen G2) is not limited to the embodiment shown in fig. 18, and can be freely designed according to the specifications of the apparatus, the preference of the user, and the like.
On the main screen G1, the mark f is displayed together with the reference image for each of the plurality of setting areas Ar set in the setting step. In the display step, the number of marks f displayed on the main screen G1 is variable, and marks f can be displayed simultaneously up to a maximum displayable number (the display upper limit).
Further, to make the marks f easier for the user to see, it is preferable to highlight the image within each mark f in the reference image. For example, as shown in fig. 18, the brightness or gray level of the reference image outside the marks f (i.e., outside the boundaries of the setting areas Ar) can be reduced.
In the display step, a mark f indicating the position of the recording area Ap and a mark f indicating the position of the other setting area Ar (hereinafter also referred to as a standby area) are displayed in different manners. In this case, the mark f indicating the position of the recording area Ap is preferably highlighted, and for example, the color of the frame line may be a relatively conspicuous color such as red or fluorescent color, or the frame line may be made thicker or may be blinked. Further, the image (i.e., the recorded image) itself in the recording area Ap can be highlighted.
When a moving object is present in the reference image, the movement region Am set for the object moves to follow the object. In the display step, as shown in fig. 9, the mark f indicating the position of the movement area Am is displayed differently from the mark f indicating the position of the setting area Ar other than the movement area Am. This makes it possible to easily distinguish the moving area Am from the other set areas Ar and to easily observe the position of the moving area Am in the reference image.
When the object in the moving region Am moves in the depth direction (i.e., in the direction of moving closer to and away from the imaging device 10), the size of the object in the reference image changes. Accordingly, the ratio of the size of the subject to the size of the moving region Am (hereinafter referred to as a subject ratio) changes. In this case, the control section 46 changes the size of the moving area Am in accordance with the change in the object ratio by the electronic zoom function, and adjusts the object ratio to be constant.
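The constant-subject-ratio adjustment described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the function and parameter names (`adjust_moving_area`, `target_ratio`, and so on) are hypothetical, and the subject's size in the reference image is assumed to be measured elsewhere.

```python
def adjust_moving_area(area_w, area_h, subject_w, subject_h, target_ratio):
    """Resize the moving area Am so that the subject-to-area size ratio
    stays constant as the subject moves in the depth direction.

    target_ratio is the desired subject width divided by area width
    (0 < target_ratio <= 1). The area's aspect ratio is preserved.
    All names are illustrative assumptions, not the patent's API.
    """
    new_w = subject_w / target_ratio
    # Keep the moving area's original aspect ratio while scaling.
    new_h = new_w * (area_h / area_w)
    return new_w, new_h
```

For example, if the subject doubles in apparent width as it approaches the camera, the moving area is doubled in both dimensions, so the subject keeps occupying the same fraction of the cropped image.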
In addition, the moving area Am may move to the vicinity of the end of the reference area (i.e., the outer edge of the angle of view) or may exceed the reference area as the object moves. In this case, the control unit 46 preferably displays a warning message or outputs a warning sound on the information display screen G3 to notify the user of the above situation.
When the reference image (i.e., the extracted image extracted from the image of the total field angle) is changed by the electronic camera shake correction, the relative position of each set area Ar with respect to the reference area changes (see fig. 4). In this case, the control unit 46 calculates the amount of deviation of the reference image based on the electronic camera shake correction, and adjusts the position of each setting area Ar based on the calculated amount of deviation. Thus, the position of each setting area Ar with respect to the reference area is maintained before and after the electronic camera shake correction is performed.
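The position adjustment accompanying electronic image stabilization can be sketched as below. This assumes the stabilization crop window shifts by `(shift_dx, shift_dy)` in sensor coordinates and that each setting area Ar is stored as an `(x, y, w, h)` tuple in reference-image coordinates; the names and the sign convention are illustrative assumptions.

```python
def compensate_areas(areas, shift_dx, shift_dy):
    """Adjust setting-area positions by the crop-window offset introduced
    by electronic camera shake correction, so that each area keeps its
    position relative to the reference area before and after correction.

    If the crop window moves by (shift_dx, shift_dy), scene content that
    was at (x, y) in the old crop appears at (x - shift_dx, y - shift_dy)
    in the new crop, so each area is shifted by the negative offset.
    """
    return [(x - shift_dx, y - shift_dy, w, h) for (x, y, w, h) in areas]
```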
Further, it is preferable that the on/off state be freely switched in accordance with the operation of the user for the function of adjusting the position of the setting area Ar in association with the electronic camera shake correction.
In the display step, as shown in fig. 18, identification information is displayed for each mark f. The identification information of each mark f is fixed during execution of the display step and remains unchanged even if the number of displayed marks f changes. For example, in fig. 18, if the display of the mark f with identification number "3" is suspended, the identification numbers of the other marks f (i.e., "1", "2", and "4") do not change. By fixing the identification number of each mark f in this way during execution of the display step, the user can easily grasp where each setting region Ar is even when the number of displayed marks f changes.
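The fixed-identification-number behavior described above can be sketched with a small registry that never reuses numbers. The class and method names are hypothetical; the patent does not specify an implementation.

```python
class AreaRegistry:
    """Assign each setting area a fixed identification number that does not
    change while the display step runs, even as areas are hidden or removed."""

    def __init__(self):
        self._next_id = 1
        self._areas = {}  # identification number -> area data

    def add(self, area):
        area_id = self._next_id
        self._next_id += 1  # identification numbers are never reused
        self._areas[area_id] = area
        return area_id

    def remove(self, area_id):
        # Suspending a mark's display removes the entry but leaves the
        # remaining identification numbers untouched.
        self._areas.pop(area_id, None)

    def visible_ids(self):
        return sorted(self._areas)
```

Reproducing the example in the text: after areas 1 to 4 are registered and area "3" is suspended, the remaining marks keep the numbers 1, 2, and 4.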
On the sub-screens G2, a plurality of setting images including the recording image and the standby images are displayed. Specifically, one corresponding setting image is displayed on each sub-screen G2. Here, if the aspect ratio of each sub-screen G2 matches that of the corresponding setting image, the setting image fits neatly within the screen when displayed.
Further, among the plurality of setting images displayed, it is preferable to highlight the recorded image. For example, it is preferable that the sub-screen G2 displaying the recorded video and the sub-screen G2 displaying the other setting video differ in the color of the screen frame, the thickness or type of the frame line, the presence or absence of lighting, and the like. Thus, a user who observes a plurality of setting images through the display can easily distinguish which image is a recorded image.
As shown in fig. 18, information indicating the video in the moving area Am (for example, character information such as "moving" or the like) may be displayed on the sub-screen G2 (second sub-screen from the top in fig. 18) on which the video in the moving area Am is displayed.
In this embodiment, a plurality of execution modes of the display process are prepared, and the display method of the video in the display process is changed for each mode. That is, the display step is performed in accordance with a mode designated by the user.
For example, when the user designates the 1st mode, the 1st display step is performed as the display step. As shown in fig. 18, in the 1st display step, the reference image and the mark f for each setting area Ar are displayed on the main screen G1, and the recording image and the standby images are displayed on the sub-screens G2. On the other hand, when the user designates the 2nd mode, the 2nd display step is performed as the display step. As shown in fig. 19, in the 2nd display step, the recording image and the standby images are displayed, while the reference image and the marks f for the setting areas Ar are not.
In the 1 st display step, the recording image, the standby image, the reference image, and the mark f for each setting area Ar do not need to be displayed on the same display (for example, the display 28 on the back surface). That is, as shown in fig. 20, the video displayed on the main screen G1 and the video displayed on the sub screen G2 may be displayed on different displays. As an example, the image displayed on the main screen G1 may be displayed on the display 28 on the rear side, and the image displayed on the sub screen G2 may be displayed on the electronic viewfinder 29 or an external display (not shown) connected to the imaging apparatus 10.
As described above, in the present embodiment, the display step includes the 1 st display step and the 2 nd display step, and in the 2 display steps, one of the steps designated by the user is performed. Therefore, the display images can be switched according to the requirements of the user. For example, when the user wants to confirm the reference image and the mark f, the 1 st display step may be performed to display the reference image and the mark f. On the other hand, when the user wants to confirm only the recording image and the standby image (i.e., the setting image), the 2 nd display step may be performed to display only the recording image and the standby image.
In addition, it is preferable that the mode of execution of the display step, that is, which of the 1 st display step and the 2 nd display step is executed, be freely switched in accordance with the switching operation by the user during the execution of the display step. For example, it is preferable that the user switches which display step is to be used by the touch panel 36 or the operation buttons 30 shown in fig. 2.
In the 2 nd display step, the recorded image and the other setting image may be displayed on the same display. For example, as shown in fig. 19, the recorded video may be displayed on the main screen G1 and the standby video may be displayed on the sub screen G2.
Alternatively, in the 2 nd display step, the recorded video and the other setting video may be displayed on different displays. For example, as shown in fig. 20, a recorded image may be displayed on a display 28 on the back side, and another set image may be displayed on an electronic viewfinder 29 or an external display (not shown) connected to the imaging device 10.
The information display screen G3 displays, for example, the suggestion information described later.
Further, on the information display screen G3, advice, warning message, and the like for photographing the user can be displayed. For example, when the exposure amount, ISO sensitivity, or the like needs to be adjusted in the recording process, the message may be displayed.
Further, information other than the above can be displayed on the information display screen G3. For example, the shooting mode used at the current time, current values of shooting conditions such as resolution and ISO sensitivity, and on/off states of various functions such as electronic zoom may be displayed.
(display of setting image based on priority)
In the display step, the number of the sub-screens G2 is determined based on at least one item of the resolution and the aspect ratio of the recorded video (hereinafter, simply referred to as "at least one item"). That is, the number of display of the setting images displayed on the sub-screen G2 is determined according to at least one item.
Similarly, the size of each sub-screen G2 is also determined according to at least one item. That is, the display size of the video is set to a size corresponding to at least one of the items.
Here, at least one of the items is adjusted according to a recording format (animation format) of the recorded video. Therefore, the number of images to be displayed (strictly speaking, the maximum number of images to be displayed) and the display size of the images to be displayed in the display step are set to the number and size corresponding to the recording format of the recorded images.
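One plausible reading of how the number and size of sub-screens could follow from the recording aspect ratio is sketched below. The layout rule (a vertical strip of full-width tiles) and all names are assumptions for illustration; the patent leaves the concrete rule open.

```python
def sub_screen_layout(panel_w, panel_h, rec_aspect, max_count=8):
    """Determine how many sub-screens of aspect ratio `rec_aspect`
    (width / height) fit in a vertical strip of panel_w x panel_h pixels,
    and the pixel size of each sub-screen.

    Hypothetical layout rule: tiles are stacked vertically at full strip
    width, so a wider recording aspect ratio yields shorter tiles and
    therefore more of them.
    """
    tile_h = panel_w / rec_aspect          # height of one tile at full width
    count = min(max_count, int(panel_h // tile_h))
    return count, (panel_w, int(tile_h))
```

With a 320 x 960 strip and 16:9 recording video, each tile is 320 x 180 and five sub-screens fit; switching the recording format to a squarer aspect ratio would reduce the count, matching the described dependence on the recording format.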
In the present embodiment, the number of display images and the display size of the setting images are determined according to at least one item, and thus each of a plurality of setting images including the recording image can be displayed favorably. For example, each setting image can be displayed on the sub-screen G2 with a suitable resolution.
In the present embodiment, the aspect ratio of the recording video is determined according to the recording format of the recording video, but the aspect ratio of the recording video may be arbitrarily set without depending on the recording format. In this case, the number and size of the sub-pictures G2 change according to the adjusted aspect ratio as the aspect ratio of the recorded video is adjusted.
In the present embodiment, both the display number and the display size of the setting images displayed in the display step (that is, the number and size of the sub-screens G2) are determined based on at least one item. However, the present invention is not limited to this, and any one of the display number and the display size of the set video may be determined according to at least one item.
Alternatively, the number of setting images displayed in the display step need not be determined based on at least one of the items; for example, the user may set or change the number of displays.
In the present embodiment, the number of displayed images to be set in the display step is determined in accordance with the resolution and aspect ratio of the recorded images, as described above. In the case of the 1 st display step, 1 of the displayed setting images is a recording image, and the rest is a standby image. Therefore, the set value relating to the number of standby videos to be displayed, that is, the maximum number of displays, is determined according to the resolution and aspect ratio of the recorded videos, specifically, the number is 1 less than the number of sub-screens G2.
In the case shown in fig. 21, the maximum number of standby videos to be displayed is 3.
On the other hand, the number of the setting areas Ar in the reference area changes depending on the subject in the reference image, and accordingly, the number of the setting areas Ar other than the recording area Ap, that is, the plurality of standby areas changes.
In addition, in the case shown in fig. 21, the number of standby areas is 6.
When the number of standby areas reaches (strictly speaking, exceeds) the maximum number of standby images that can be displayed, the control unit 46 performs the display step according to the flow shown in fig. 22. In this flow, the control unit 46 first performs the determination step (S021).
In the determination step, the control unit 46 determines a priority for each of the plurality of standby areas. Specifically, the control unit 46 first detects the subject in each of the plurality of standby images (S101). Next, the control unit 46 acquires information on the detected subject in each standby image (S102). Instead of information on the subject, the control unit 46 may acquire actual-performance information regarding selection as the recording area, described later.
Examples of the information on the subject include the time or number of times the subject appears in the reference image during shooting, and the size (display size) of the subject in the reference image. These pieces of information can be acquired by the control unit 46 analyzing the reference image and measuring or counting them during the shooting period.
When the subject is a person, the person's face can be recognized by a technique such as pattern matching, the importance of the recognized person can be determined, and the determination result can be used as information on the subject. The importance of a person can be determined, for example, by identifying the attribute of the recognized person (for example, family member, relative, acquaintance, or other) and deriving the importance from that result. When a plurality of subjects are detected, the priority may be determined by subject type (for example, person, animal, article, or landscape). Alternatively, priority may be given to subjects in order of earliest detection.
It is needless to say that information other than the above information may be considered as information related to the object. In the determination step, it is desirable for the user to freely select which information is used as the information on the object.
Thereafter, in the determination step, the control unit 46 determines the priorities of the plurality of standby areas based on the information on the detected object (S103).
After the determination step, the control unit 46 selects N standby areas among the plurality of standby areas based on the priorities (S022). Here, the number N of the selected standby areas is the maximum number of standby videos to be displayed, and is determined according to the number of the sub-screens G2 (for example, the number is 1 less than the number of the sub-screens G2). In step S022, the control unit 46 selects N standby areas in order from the higher priority side.
Then, the control unit 46 displays the images (standby images) of the selected N standby areas on the sub-screen G2 (S023). Here, when the sub-screen G2 is arranged in the vertical direction of the display screen as shown in fig. 21, it is desirable that the standby image in the standby area with higher priority be displayed on the upper sub-screen G2.
As described above, in the present embodiment, when the number of the plurality of standby areas reaches (exceeds) the maximum display number of standby videos, N standby videos selected according to the priority are displayed. This makes it possible to appropriately display a standby video image having a high priority in response to a restriction on screen display.
The method of determining the priority is not limited to one based on information on the subject in the standby image. For example, the priority may be determined based on actual-performance information regarding selection as the recording area. More specifically, for each of the plurality of standby areas, the control unit determines the number of times the area has been selected as the recording area since shooting started, the total time it has been the recording area, the time elapsed since it was last selected as the recording area, or the like. These pieces of information correspond to information on the area's past record of being selected as the recording area.
Then, the priorities of the plurality of standby areas may be determined based on the information on the performance. For example, when the information on the actual performance is the number of times or the time of being selected as the recording area, the higher the number of times or the time, the higher the priority may be given.
In the flow of the display step shown in fig. 22, N standby areas are selected based on priority, and only the images of the selected standby areas are displayed; however, the display is not limited to this. For example, the standby areas may be divided into the top N and the remainder in descending order of priority. In this case, the standby images of the top N areas may be displayed first, and the standby images of the remaining areas may be displayed by scrolling or switching the display screen, as shown in fig. 23.
Further, each of the plurality of standby images may be displayed in a size corresponding to the priority. For example, as shown in fig. 24, the standby video in the standby area with a high priority may be displayed on the sub-screen G2 with the normal size, and the standby video in the standby area with a low priority may be displayed on the sub-screen G2x with the reduced size. In this way, it is possible to display each of the plurality of standby videos and to display the standby area having a high priority more easily.
[ control procedure ]
A control procedure in the image display flow will be described. The control step is a step of executing a control process for controlling the change of the reference region, and is performed at least after the reference region is determined. That is, the control process is a process for controlling the change of the reference area after the time point when the user's recording start instruction is received.
In the control process, the control section 46 locks the zoom driving section 21 and stops the movement of the optical device 18 for zooming. Thus, in the recording step, the optical zoom function is restricted, the reference region is fixed, and the angle of view at that point is maintained.
By executing the control processing described above, it is possible to prevent the relative position of each set area Ar with respect to the reference area from changing during the execution of the recording step. As a result, it is possible to avoid a situation in which each setting image such as a recording image cannot be appropriately captured because each setting area Ar is displaced from the reference area.
The control processing executed to suppress changes in the reference area is not limited to the above; it may be, for example, processing for presenting suggestion information related to changes in the reference area. As an example, a message prohibiting optical zooming may be displayed on the information display screen G3 or output as audio. In addition, as shown in fig. 18, a message prohibiting operations that move the imaging device 10 (for example, panning), a message recommending that the imaging device 10 be fixed with a tripod or other mount, or the like may be displayed.
The control step may be performed at a point before the reference area is determined (that is, before the recording start instruction is received), or may be performed in the setting step, for example. In this case, it is possible to suppress the change of the reference area while the plurality of setting areas Ar are set in the reference area.
[ changing step ]
When a predetermined condition is satisfied during execution of the recording step, the control unit 46 executes the changing step. The changing step is a step of performing change processing related to changing the number of displayed setting-area images (i.e., setting images). By executing the change processing, the number of displayed setting images can be dynamically increased or decreased during execution of the recording step.
The flow of processing during execution of the recording step will be described below with reference to fig. 25. Note that in figs. 26 to 31, referred to in the following description, the reduced gray scale of the image outside the marks f is omitted for convenience of illustration.
During the execution of the recording step, the control unit 46 monitors the reference video and determines whether or not the number of subjects in the reference video is increased (S031). When the number of objects in the reference image increases, the controller 46 performs a changing step (S032). In the changing step, a changing process is performed to increase the number of displayed setting images.
In the changing process in step S032, the setting area Ar is added in accordance with the increase of the object in the reference image, and the setting image of the added setting area Ar is displayed on a new display. As a result, the number of display images to be set increases. For example, as shown in fig. 26 and 27, when the number of subjects in the reference picture is increased from 3 to 4, the number of displayed setting pictures (i.e., the number of pictures displayed in the sub-picture G2) is increased from 3 to 4.
When a setting area Ar is added in response to an increase in the number of subjects in the reference image, the number of marks f in the reference image displayed on the main screen G1 increases by the number of added setting areas Ar (see fig. 27).
In the above case, the setting region Ar is set at the position of the new subject in the reference image, but the present invention is not limited to this. For example, a candidate Ac of a setting area may be set instead of the setting area Ar. In this case, a mark indicating the position of the candidate Ac (specifically, the candidate mark K) is displayed on the reference image. Then, when the user performs an operation such as clicking the candidate mark K, the setting region Ar is added at the position of the candidate Ac with that operation as a trigger. As a result, the setting image of the added setting area Ar is newly displayed, and the number of displayed setting images increases.
Alternatively, instead of setting the setting area Ar or its candidate Ac at the position of a new subject in the reference image, suggestion information related to adding a setting area Ar (i.e., to changing the number of displayed setting images) may be output. For example, a message proposing that a setting area Ar be added at the position of the new subject and that the number of displayed setting images be increased may be displayed on the information display screen G3 or output as audio. In this way, processing that outputs suggestion information related to a change in the number of displayed setting images can be executed as the change processing.
In the flow shown in fig. 25, the control unit 46 also monitors the reference image during execution of the recording step and determines whether the moving area Am overlaps another setting area Ar in the reference area (S033). Then, as shown in fig. 28, when the moving area Am overlaps, or is about to overlap, another setting area Ar, the control unit 46 outputs a warning message or warning sound (S034). This is because, when the moving area Am overlaps another setting area Ar, the subject in one area may be hidden behind the subject in the other area and cannot be captured.
By outputting a warning message or the like, the user can be notified that the moving area Am overlaps, or is about to overlap, another setting area Ar. The user who receives the warning can take an appropriate measure (for example, resetting the setting area Ar). As one such measure, when the user requests a change by a screen operation or the like (S035), the control unit 46 performs the changing step (S036). In this changing step, change processing for reducing the number of displayed setting images is performed.
In the changing process in step S036, the overlapped moving area Am and other setting areas Ar are updated to 1 setting area Ar, and the setting image of the updated setting area Ar is displayed on the display. For example, as shown in fig. 29, the image of the moving area Am and the image of the other set area Ar, which are displayed separately as shown in fig. 28, are combined into 1 image including the subject of each of the 2 images. As a result, the number of setting regions Ar is reduced by 1, and the number of display of setting images is also reduced by 1.
It is preferable that the size and position of the updated setting area Ar be set so as to accommodate the subjects of both images. The identification information of the mark f indicating the updated setting area Ar may carry over the identification information of the mark f of one of the two pre-update areas (i.e., the moving area Am or the other setting area Ar). Alternatively, a new identification number may be assigned to the updated setting area Ar.
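The overlap determination (step S033) and the merging of two overlapping areas into one updated setting area can be expressed with standard axis-aligned rectangle operations. The `(x, y, w, h)` representation of an area is an assumption for illustration.

```python
def rects_overlap(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) rectangles,
    corresponding to the check of whether the moving area Am
    overlaps another setting area Ar."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah


def merge_rects(a, b):
    """Smallest rectangle containing both areas: the updated setting
    area sized and positioned to accommodate the subjects of both images."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    x0, y0 = min(ax, bx), min(ay, by)
    x1 = max(ax + aw, bx + bw)
    y1 = max(ay + ah, by + bh)
    return (x0, y0, x1 - x0, y1 - y0)
```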
In the flow shown in fig. 25, the control unit 46 periodically determines, during execution of the recording step, whether each of the plurality of setting areas Ar satisfies a suspension condition (S037). A suspension condition is a criterion for determining whether display of the image (setting image) of a setting area Ar should be suspended; examples include the following conditions (1) to (3).
Suspension condition (1): at least a part of the set area Ar exceeds the reference area due to optical zooming or the like.
Suspension condition (2): for a certain time or longer, there is no change in the setting image of the setting area Ar or no object in the setting image.
Suspension condition (3): the set area Ar is not selected as the recording area for a predetermined time or longer.
The suspension condition is not limited to the above conditions (1) to (3), and conditions other than the above conditions may be included in the suspension condition.
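The three example suspension conditions can be checked per area as sketched below. The data layout (`rect`, `last_change`, `last_selected` timestamps) and the threshold values are illustrative assumptions; the patent only names the conditions, not their parameters.

```python
def should_suspend(area, reference, now):
    """Return True if the setting area satisfies any of the example
    suspension conditions (1)-(3). Thresholds are illustrative.

    `area` is a dict with a "rect" (x, y, w, h) in reference coordinates
    and timestamps of the last image change and last selection as the
    recording area; `reference` is the reference area (x, y, w, h).
    """
    UNCHANGED_LIMIT = 30.0    # seconds without change, condition (2)
    UNSELECTED_LIMIT = 60.0   # seconds unselected, condition (3)

    # (1) at least part of the area lies outside the reference area
    x, y, w, h = area["rect"]
    rx, ry, rw, rh = reference
    if x < rx or y < ry or x + w > rx + rw or y + h > ry + rh:
        return True
    # (2) no change (or no subject) in the setting image for a certain time
    if now - area["last_change"] >= UNCHANGED_LIMIT:
        return True
    # (3) not selected as the recording area for a certain time
    if now - area["last_selected"] >= UNSELECTED_LIMIT:
        return True
    return False
```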
Then, when there is a setting area Ar satisfying any of the above suspension conditions (1) to (3), the control unit 46 performs the changing step (S038). In this changing step, change processing for reducing the number of displayed setting images is performed.
In the changing process of step S038, the display of the setting image of the setting area Ar satisfying the suspension condition is suspended. For example, in fig. 26, when the setting area Ar whose flag f carries the identification information "2" satisfies the suspension condition, the display of the setting image of that setting area Ar is suspended (it disappears from the sub-screen G2), as shown in fig. 30. As a result, the number of displayed setting images is reduced by one.
In addition, as shown in fig. 30, for a setting area Ar whose setting image display has been suspended (hereinafter referred to as a pause area), the display of the flag f indicating the position of that area on the main screen G1 is also suspended.
Alternatively, instead of immediately suspending the display of the setting image of a setting area Ar satisfying the suspension condition, suggestion information regarding the suspension (in other words, regarding the change in the number of displayed setting images) may be output. For example, a message recommending that the display of the setting image of the setting area Ar satisfying the suspension condition be suspended may be shown on the information display screen G3 or played back as sound or the like. In the meantime, another image (an image different from the setting image) may be displayed on the sub-screen G2 in place of the setting image of the setting area Ar satisfying the suspension condition.
When the display of the setting image of a pause area is suspended, the pause area is stored in the internal memory 50 or the like in the imaging device 10 while the recording step is performed. The pause area may be stored until shooting is completed or until the power of the imaging device 10 is turned off.
When the display of the setting image of a pause area is suspended, an operation icon such as a return button Bt9 is displayed on the main screen G1, as shown in fig. 30. The user can instruct redisplay of the setting image of the pause area by, for example, tapping the return button Bt9. When such a redisplay instruction is given by the user (S039), the control unit 46 performs the changing step (S040). In the changing step, a changing process that redisplays the stored setting image of the pause area is executed in accordance with the user's redisplay instruction.
Through the changing process of step S040, the setting image of the pause area is displayed again on the sub-screen G2, and the number of displayed setting images returns to the number before that display was suspended.
The setting image of a pause area that can be redisplayed is not limited to that of the immediately preceding pause area (the setting area whose setting image display was most recently suspended). For example, the setting images of several past pause areas during the current shooting session may be redisplayed. In this case, the imaging device 10 stores those past pause areas, and the user designates which pause area's setting image to redisplay and issues the redisplay instruction.
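One way to hold several past pause areas and let the user pick which to redisplay is a small store keyed by the flag's identification number. This Python sketch is purely illustrative; `PauseAreaStore` and its methods are hypothetical names, not part of the patent:

```python
class PauseAreaStore:
    """Holds setting areas whose display was suspended during the current
    shooting session, keyed by the flag's identification number, so the
    user can choose which one to redisplay."""

    def __init__(self):
        self._paused = {}  # id -> stored setting-area state (rect, etc.)
        self._order = []   # suspension order, most recent last

    def suspend(self, area_id, state):
        self._paused[area_id] = state
        self._order.append(area_id)

    def redisplay(self, area_id=None):
        """Return the stored state to show again; by default the area
        whose display was most recently suspended."""
        if area_id is None:
            area_id = self._order[-1]
        self._order.remove(area_id)
        return self._paused.pop(area_id)
```

Clearing the store when shooting ends (or at power-off) would match the storage period described above.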
In the flow shown in fig. 25, the control unit 46 monitors the reference image during execution of the recording step and determines whether the number of subjects in the reference area exceeds the upper limit of the number of setting areas Ar (S041). When it does, the control unit 46 performs the changing step (S042). In the changing step, a changing process associated with a reduction in the number of displayed setting images is performed.
In the changing process of step S042, a message indicating that the number of subjects exceeds the upper limit of the number of setting areas Ar and recommending a reduction in the number of setting images is displayed on the information display screen G3 or played back as sound or the like. Prompted by this, the user revises the setting areas Ar and deletes unnecessary ones, for example the setting area Ar that has gone unselected as the recording area for the longest period. As a result, the number of displayed setting images is reduced by the number of setting areas Ar the user deleted.
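Identifying deletion candidates — the areas unselected as the recording area for the longest time once subjects exceed the limit — can be sketched as below. The limit value and function names are assumptions for illustration, not values from the patent:

```python
MAX_SETTING_AREAS = 4  # assumed upper limit; the actual limit is device-specific

def deletion_candidates(subject_count, last_selected):
    """Given the number of detected subjects and a mapping
    {area_id: time the area was last selected as the recording area},
    suggest the setting areas that have gone unselected the longest,
    enough of them to bring the count back within the limit."""
    excess = subject_count - MAX_SETTING_AREAS
    if excess <= 0:
        return []
    by_staleness = sorted(last_selected, key=last_selected.get)
    return by_staleness[:excess]
```

In the embodiment the deletion itself is left to the user; a sketch like this would only rank the suggestions shown in the message.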
The series of steps S031 to S042 described above is repeated during execution of the recording step and ends when the recording step ends (S043).
As described above, in the present embodiment, the changing step is performed in response to a change in the reference image (for example, the addition of a subject) or the satisfaction of a suspension condition, and the number of displayed setting images is changed appropriately. The user can thus check a number of setting images suited to the current situation and skip checking the images of setting areas Ar that are unlikely to become the recording area. As a result, convenience for the user is improved.
< other embodiments >
The embodiments described above are specific examples given to explain the display method of the present invention in an easily understandable way; they are merely illustrative, and other embodiments are also conceivable.
In the above-described embodiment, the imaging device 10 is assumed to serve as the display device that displays the various images, but the present invention is not limited to this. For example, the display device may be constituted by the imaging device 10, a camera controller connected to the imaging device 10 by wire or wirelessly, an external display, and the like. In this case, the camera controller functions as the control unit 46, causing the imaging device 10 to capture the reference image and the external display to display the various images. The camera controller may also set the setting areas Ar, select and reselect (switch) the recording area, and record the recording image.
In the above-described embodiment, a control process for suppressing a change of the reference region is performed in the video display flow. However, the present invention is not limited to this, and a process that urges a change of the reference region may be executed as the control process related to the change of the reference region. In this case, the reference region can be adjusted in accordance with a change in the reference image (for example, movement of a subject in the reference image).
More specifically, suppose for example that in the recording step the video is recorded while zooming in and out by electronic zoom. If the zoom magnification exceeds a predetermined value, the resolution of the recorded video falls to or below a fixed value, and the desired image quality may be difficult to obtain. In that case, as shown in fig. 31, information recommending that the reference region be changed by performing optical zoom instead of electronic zoom may be displayed on the information display screen G3. Alternatively, for example, the control unit 46 may automatically change the reference region by driving the zoom drive unit and forcibly performing optical zoom so that the resolution becomes equal to or higher than the predetermined value.
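The resolution criterion can be expressed as a simple width comparison on the electronic-zoom crop. The 1920-pixel threshold (full-HD recording width) and the function names below are assumptions chosen for illustration:

```python
def resolution_too_low(crop_width_px, min_record_width_px=1920):
    """Electronic zoom crops the sensor image; once the crop is narrower
    than the minimum recording width, the recorded video must be upscaled
    and quality drops, so optical zoom should be recommended (or forced)
    instead."""
    return crop_width_px < min_record_width_px

def max_electronic_zoom(sensor_width_px, min_record_width_px=1920):
    """Largest electronic zoom magnification that still keeps the crop at
    or above the minimum recording width."""
    return sensor_width_px / min_record_width_px
```

For a 6000-pixel-wide sensor, electronic zoom beyond about 3.1x would trip the check and trigger the recommendation (or the forced optical zoom) described above.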
When the reference area is changed by optical zoom, the relative position of each setting area Ar with respect to the reference area changes. In this case, the control unit 46 preferably calculates the amount of change of the reference region and adjusts the position and size of each setting area Ar based on the calculated amount. The relative position of each setting area Ar with respect to the reference area is thereby maintained before and after zooming.
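Maintaining each setting area's relative position under optical zoom amounts to scaling its rectangle about the frame center (the optical axis). The following is a hypothetical Python sketch, not an implementation from the patent:

```python
def rescale_area(rect, frame_w, frame_h, zoom_ratio):
    """Adjust one setting area after an optical zoom so that its position
    relative to the reference area is preserved.  zoom_ratio is
    new_magnification / old_magnification; scaling is performed about the
    frame center, taken here as the optical axis."""
    x, y, w, h = rect
    cx, cy = frame_w / 2, frame_h / 2
    new_w, new_h = w * zoom_ratio, h * zoom_ratio
    # move the rectangle's center away from (or toward) the frame center
    new_cx = cx + (x + w / 2 - cx) * zoom_ratio
    new_cy = cy + (y + h / 2 - cy) * zoom_ratio
    return (new_cx - new_w / 2, new_cy - new_h / 2, new_w, new_h)
```

An area scaled beyond the frame bounds by this transform would then satisfy suspension condition (1) above.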
During execution of the recording step, the moving region Am may follow the movement of a subject and reach the vicinity of the edge of the reference region, or at least part of it may extend beyond the reference region. In this case, information urging the user to move the imaging device 10 so that the moved moving region Am fits within the reference region (that is, urging a change of the reference image) may be displayed on the information display screen G3.
Here, when the user moves the imaging device 10 during the recording step and thereby changes the reference region, a known detection sensor such as a gyro sensor provided in the imaging device 10 may detect the movement (for example, the displacement) of the imaging device 10. The control unit 46 then preferably adjusts the position and size of each setting area Ar based on the detected movement of the imaging device 10. The relative position of each setting area Ar with respect to the changed reference area is thus adjusted accordingly. At this time, a setting area Ar that falls outside the reference area as a result of the change may be deleted automatically.
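Compensating for camera movement can be sketched as shifting each setting area by the detected displacement (already converted to pixels) and discarding areas that end up outside the reference area. Function names and the pixel-displacement conversion are assumptions for illustration:

```python
def follow_camera_motion(areas, dx_px, dy_px, frame_w, frame_h):
    """Shift every setting area opposite to the detected camera
    displacement so each keeps covering the same scene content, and drop
    areas pushed outside the reference area.

    areas: {area_id: (x, y, w, h)}; dx_px/dy_px: camera displacement in
    pixels (rightward / downward positive)."""
    kept = {}
    for area_id, (x, y, w, h) in areas.items():
        nx, ny = x - dx_px, y - dy_px
        if nx >= 0 and ny >= 0 and nx + w <= frame_w and ny + h <= frame_h:
            kept[area_id] = (nx, ny, w, h)
    return kept
```

Converting a gyro sensor's angular displacement into this pixel offset would depend on focal length and sensor geometry, which this sketch leaves out.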
In the above-described embodiment, the recorded video is a moving image, that is, a set of a plurality of frame images continuously captured at a constant frame rate. However, the recorded video is not limited to moving images, and may be still images.
For example, the control unit 46 displays the reference image on the main screen G1 as a live view image and sets a plurality of setting areas Ar in the reference area. The user selects the recording area Ap from among the plurality of setting areas Ar, and the control unit 46 displays a plurality of setting images including the recording image on the sub-screen G2. When the user gives a recording instruction by operating the release button 26 or the like, the control unit 46 may record the recording image as a still image on the recording medium.
In the above-described embodiment, when a subject in the reference image is detected, a setting area Ar is automatically set at the position of the detected subject. However, the setting area Ar need not be set immediately at the position of the detected subject; a candidate Ac of the setting area may first be set at that position. In this case, a candidate mark K is displayed at the position of the candidate Ac, and when the user performs a predetermined operation on the candidate mark K, the setting area Ar may be set at the position of the candidate Ac.
In the above-described embodiment, the display of the setting image is suspended for a setting area Ar satisfying the suspension condition, and that area is stored in the imaging device 10 during execution of the recording step. However, the present invention is not limited to this; the setting area Ar satisfying the suspension condition may instead be deleted without storing the image of that area. In this case, suggestion information concerning the deletion of the setting area Ar is preferably displayed on the information display screen G3 before the area is deleted. A screen (not shown) for choosing whether to delete the setting area Ar may additionally be displayed, and the setting area Ar satisfying the suspension condition may be deleted after the user's intention has been confirmed.
In the above-described embodiment, the imaging device is a digital camera, but it may instead be a video camera or a mobile terminal with an imaging optical system, such as a mobile phone, smartphone, or tablet terminal. Further, the imaging lens may be a lens unit externally attached to the imaging optical system of the mobile terminal.
Description of the symbols
10-photographic apparatus, 12-photographic apparatus body, 13-bayonet, 14-photographic lens, 17-optical unit, 18, 19-optical device, 20-aperture, 21-zoom drive section, 22-focus drive section, 23-aperture drive section, 26-release button, 28-display, 29-electronic viewfinder, 30-operation button, 36-touch panel, 38-shutter, 40-imaging element, 42-pixel, 44-analog signal processing circuit, 46-control section, 47-controller, 48-image processing section, 50-internal memory, 52-card slot, 54-memory card, 56-buffer memory, 61-camera controller, 62-external display, 101-1 st UI screen, 102-2 nd UI screen, 103-3 rd UI screen, 104-4 th UI screen, 105-input screen, Ac-candidates, Am-moving area, Ap-recording area, Ar-setting area, As, At-photographing area, Bt 1-Bt 8-button, Bt 9-return button, f, fx-flag, G1-main screen, G2, G2 x-sub-screen, G3-information display screen, K-candidate flag, L1-optical axis, T-table data.

Claims (18)

1. A display method for displaying an image captured by an imaging device, the display method comprising:
a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image;
a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas;
a switching step of, after the selecting step is performed, reselecting the recording area from the plurality of setting areas and switching the recording area; and
a display step of displaying the recording image, the reference image, and a mark indicating each position of the plurality of setting regions in the reference image.
2. The display method according to claim 1,
the flag includes a boundary of the set area,
in the displaying step, the image within the boundary in the reference image is displayed in an emphasized manner.
3. The display method according to claim 1 or 2,
at the start of the setting step, the flag is displayed in a state where the setting regions do not overlap with each other.
4. The display method according to any one of claims 1 to 3,
in the display step, the mark indicating the position of the recording area in the reference picture and the mark indicating the position of the standby area that is the setting area other than the recording area are displayed in different manners.
5. The display method according to any one of claims 1 to 3,
in the display step, the mark indicating the position of the movement region, which is the setting region that moves following a moving object, is displayed in a manner different from the mark indicating the position of a setting region other than the movement region.
6. The display method according to any one of claims 1 to 5,
in the display step, the recorded video and the reference video are displayed on different displays.
7. The display method according to any one of claims 1 to 6,
the display step includes:
a 1 st display step of displaying the recorded image, the reference image, and the mark; and
a 2 nd display step of displaying the recorded image without displaying the reference image and the mark,
and performing one of the 1 st display step and the 2 nd display step designated by a user.
8. The display method according to claim 7,
in the 2 nd display step, the recorded video and the standby video that is a video in the setting area other than the recording area are displayed on different displays.
9. The display method according to any one of claims 1 to 8,
the number of displays of the mark in the display process is variable,
in the display step, identification information of the mark is displayed for each mark,
when the number of displayed marks is changed, the identification information set in each mark is maintained before and after the change in the number of displayed marks.
10. A display method for displaying an image captured by an imaging device, the display method comprising:
a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image;
a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas;
a switching step of, after the selecting step is performed, reselecting the recording area from the plurality of setting areas and switching the recording area;
a determination step of determining priorities for a plurality of standby areas when setting areas other than the recording areas are standby areas; and
a display step of displaying a plurality of standby images which are images of the plurality of standby areas,
when the number of the plurality of standby areas set in the reference area reaches a set value related to the number of the standby images to be displayed, the display step displays the standby images of the standby area selected based on the priority among the plurality of standby areas, or displays the standby images in a size corresponding to the priority.
11. The display method according to claim 10,
in the determining step, the object in the standby images is detected, and the priorities of the standby areas are determined based on information on the detected object.
12. The display method according to claim 10,
in the determining step, an actual result of the standby area selected as the recording area in the past is determined, and the priority of each of the plurality of standby areas is determined based on information on the determined actual result.
13. A display method for displaying an image captured by an imaging device, the display method comprising:
a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image;
a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas;
a switching step of, after the selecting step is performed, reselecting the recording area from the plurality of setting areas and switching the recording area;
a display step of displaying a plurality of setting images that are images of the plurality of setting areas; and
a control step of executing a control process related to a change of the reference region after at least the reference region is determined.
14. The display method according to claim 13,
the control processing includes at least one of processing for suppressing a change of the reference region, processing for causing a change of the reference region, and processing for notifying proposal information related to the change of the reference region.
15. The display method according to claim 13 or 14,
in the case where the imaging apparatus includes an optical device for zooming, the control process is executed in the control step in association with zooming by the optical device.
16. The display method according to any one of claims 13 to 15,
determining the reference area when a recording start instruction from a user is received,
the control process is a process related to a change of the reference area after the time point when the recording start instruction is received.
17. A display method for displaying an image captured by an imaging device, the display method comprising:
a setting step of setting a plurality of setting regions in a reference region that is a photographing region of a reference image;
a selection step of selecting a recording area as an area for recording a video to be recorded from the plurality of setting areas;
a switching step of, after the selecting step is performed, reselecting the recording area from the plurality of setting areas and switching the recording area; and
a display step of displaying a setting image as an image of the setting area,
at least one of the number of the setting images to be displayed in the displaying step and the display size of the setting images is determined based on at least one of the resolution and the aspect ratio of the recording image.
18. The display method according to claim 17,
the number of the setting images displayed in the displaying step is determined based on the at least one item adjusted according to the recording format of the recording image.
CN202080094541.9A 2020-01-30 2020-11-16 Display method Pending CN115004681A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020013155 2020-01-30
JP2020-013155 2020-01-30
PCT/JP2020/042589 WO2021152960A1 (en) 2020-01-30 2020-11-16 Display method

Publications (1)

Publication Number Publication Date
CN115004681A true CN115004681A (en) 2022-09-02

Family

ID=77078235

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202080094541.9A Pending CN115004681A (en) 2020-01-30 2020-11-16 Display method

Country Status (4)

Country Link
US (1) US20220360704A1 (en)
JP (1) JPWO2021152960A1 (en)
CN (1) CN115004681A (en)
WO (1) WO2021152960A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005175683A (en) * 2003-12-09 2005-06-30 Nikon Corp Digital camera
CN1856023A (en) * 2005-04-21 2006-11-01 佳能株式会社 Imaging apparatus and control method therefor
JP2011175508A (en) * 2010-02-25 2011-09-08 Mazda Motor Corp Parking support system
JP2014042357A (en) * 2008-09-12 2014-03-06 Sanyo Electric Co Ltd Imaging device and image processing device
CN107408300A (en) * 2015-04-14 2017-11-28 索尼公司 Image processing apparatus, image processing method and image processing system


Also Published As

Publication number Publication date
JPWO2021152960A1 (en) 2021-08-05
WO2021152960A1 (en) 2021-08-05
US20220360704A1 (en) 2022-11-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination