CN111200686A - Photographed image synthesizing method, terminal, and computer-readable storage medium - Google Patents

Photographed image synthesizing method, terminal, and computer-readable storage medium

Info

Publication number
CN111200686A
Authority
CN
China
Prior art keywords
image
screen
video
camera
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811378939.6A
Other languages
Chinese (zh)
Inventor
钟宇恒
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp
Priority to CN201811378939.6A
Publication of CN111200686A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/02 Constructional features of telephone sets
    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0266 Details of the structure or mounting of specific components for a display module assembly
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/44016 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/16 Details of telephonic subscriber devices including more than one display unit
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera

Abstract

The invention discloses a shot image synthesis method, a terminal, and a computer-readable storage medium. The terminal has a first part and a second part which are movably connected; the first part is provided with a first screen and a first camera, and the second part is provided with a second screen and a second camera. The shot image synthesis method comprises the following steps: displaying a first image shot by the first camera on the first screen; displaying a second image shot by the second camera on the second screen; selecting a target area from the first image and selecting a replacement area from the second image; and synthesizing the target area and the portion of the second image outside the replacement area to obtain a synthesized image. According to the technical solution of the invention, after the two cameras of the dual-screen mobile phone respectively capture images and display them on the two screens, a target area and a replacement area are selected from the two screens and the two images are synthesized according to them, so that the contents of the two screen images are fused together and the synthesized image has a greater sense of unity.

Description

Photographed image synthesizing method, terminal, and computer-readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a captured image synthesis method, a terminal, and a computer-readable storage medium.
Background
Dual-screen mobile phones are gradually becoming popular in the market. A dual-screen phone typically has two screens and two cameras, which can capture images and display them on the two screens simultaneously.
In the prior art, the photos taken by the two cameras can be stitched into a single picture, but after such simple stitching the resulting photo clearly consists of two independent parts, and the overall unity of the stitched photo is poor.
Therefore, a new technical solution is needed to synthesize the images shot by the dual-screen mobile phone, so as to ensure that the shot image contents can be fused together.
Disclosure of Invention
The invention mainly aims to provide a shot image synthesis method, a terminal and a computer readable storage medium, aiming at synthesizing images shot by a double-screen mobile phone and ensuring that the shot image contents can be fused together.
To achieve the above object, the present invention provides a shot image synthesizing method, in which a terminal has a first portion and a second portion that are movably connected, the first portion has a first screen and a first camera thereon, and the second portion has a second screen and a second camera thereon, the method comprising: displaying a first image shot by the first camera through the first screen; displaying a second image shot by the second camera through the second screen; selecting a target region from the first image and selecting a replacement region from the second image; and synthesizing the target region and the portion of the second image outside the replacement region to obtain a synthesized image.
In order to achieve the above object, the present invention further provides a terminal having a first portion and a second portion that are movably connected, the first portion having a first screen and a first camera thereon, the second portion having a second screen and a second camera thereon; the terminal further comprises a processor, a memory and a communication bus; the communication bus is used for realizing connection communication between the processor and the memory; the processor is configured to execute a photographed image synthesizing program stored in the memory to realize the following steps: displaying a first image shot by the first camera through the first screen; displaying a second image shot by the second camera through the second screen; selecting a target region from the first image and selecting a replacement region from the second image; and synthesizing the target region and the portion of the second image outside the replacement region to obtain a synthesized image.
To achieve the above object, the present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors to implement the steps of the aforementioned method.
According to the above technical solutions, it can be seen that the shot image synthesis method, the terminal, and the computer-readable storage medium of the present invention have at least the following advantages:
according to the technical solution of the present invention, after the two cameras of the dual-screen phone respectively capture images and display them on the two screens, a target region and a replacement region are selected from the two screens, the two images are synthesized according to the target region and the replacement region, the contents of the two screen images are fused together, and the synthesized image has a greater sense of unity.
Drawings
Fig. 1 is a flowchart of a captured image synthesizing method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a captured image synthesizing method according to an embodiment of the present invention;
fig. 3 is a flowchart of a captured image synthesizing method according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a captured image synthesizing method according to an embodiment of the present invention;
fig. 5 is a block diagram of a terminal according to one embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "part", or "unit" used to denote elements are adopted only to facilitate the explanation of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
As shown in fig. 1, in an embodiment of the present invention, there is provided a photographed image composing method, in which a terminal has a first portion and a second portion that are movably connected, the first portion has a first screen and a first camera thereon, and the second portion has a second screen and a second camera thereon. A mobile phone with two cameras and two screens is shown in fig. 2.
The method of the embodiment comprises the following steps:
step S110, a first image captured by the first camera is displayed through the first screen.
In the present embodiment, the image includes a picture or a video.
Step S120, displaying a second image shot by the second camera through the second screen.
In this embodiment, the dual-screen device is opened, the screen displays are defined, and the correspondence between screens and cameras is set automatically (a template may be preset) or manually. For example, the first screen displays the front camera's preview and the pictures or videos it captures, and the second screen displays the rear camera's preview and the pictures or videos it captures. The photographing or video-recording mode is selected: the camera application is opened and the front/rear shooting modes are chosen separately, the front camera recording video or taking photos and the rear camera recording video or taking photos. The four combinations are: front photo and rear photo; front video and rear video; front photo and rear video; front video and rear photo. 1) "Photo-video" combination: a photo is taken in front and a video is shot in the rear, or a video is shot in front and a photo is taken in the rear, i.e., one camera takes a photo while the other records a video, and the photo and the video are stored after capture. 2) "Video-video" combination: the front and rear cameras each record video and the recordings are stored separately.
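As an illustration of the mode setup just described, the following minimal Python sketch models the screen-to-camera correspondence and the four capture combinations; all names (CaptureMode, ScreenConfig, PRESET_TEMPLATES) are hypothetical and are not part of any real handset API.

    from dataclasses import dataclass
    from enum import Enum

    class CaptureMode(Enum):
        PHOTO = "photo"
        VIDEO = "video"

    @dataclass
    class ScreenConfig:
        """Maps one screen to one camera and a capture mode."""
        screen: str   # "first" or "second"
        camera: str   # "front" or "rear"
        mode: CaptureMode

    # The four combinations described above (front/rear x photo/video).
    PRESET_TEMPLATES = {
        "photo-photo": (ScreenConfig("first", "front", CaptureMode.PHOTO),
                        ScreenConfig("second", "rear", CaptureMode.PHOTO)),
        "video-video": (ScreenConfig("first", "front", CaptureMode.VIDEO),
                        ScreenConfig("second", "rear", CaptureMode.VIDEO)),
        "photo-video": (ScreenConfig("first", "front", CaptureMode.PHOTO),
                        ScreenConfig("second", "rear", CaptureMode.VIDEO)),
        "video-photo": (ScreenConfig("first", "front", CaptureMode.VIDEO),
                        ScreenConfig("second", "rear", CaptureMode.PHOTO)),
    }

A preset template can then be applied automatically when the dual-screen device is unfolded, or the user can override the mapping manually.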
Step S130 selects a target region from the first image and a replacement region from the second image.
In this embodiment, the display of either screen is selected as the daughter board, from which a picture region, or certain regions of the video within a certain time span, can be extracted and set aside (as the target region). The display of the other screen serves as the master, and certain regions of the master video within a certain time span are replaced by the selected parts (the replacement region). The master may also leave the region unspecified: the daughter-board selection can simply be dragged to any position on the master, and the corresponding position serves as the replaced region.
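A minimal sketch of the drag-to-replace behaviour described above, assuming both screens address their images in plain pixel coordinates; the Region type and the helper name are illustrative only.

    from dataclasses import dataclass

    @dataclass
    class Region:
        x: int       # top-left corner, in pixels
        y: int
        width: int
        height: int

    def drop_target_on_master(target: Region, drop_x: int, drop_y: int) -> Region:
        """When the daughter-board selection is dragged onto the master, the
        drop position on the master becomes the replacement region, keeping
        the size of the selected target region."""
        return Region(x=drop_x, y=drop_y, width=target.width, height=target.height)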
Step S140, synthesizing the target region and the portion of the second image outside the replacement region to obtain a synthesized image.
In this embodiment, the designated region of the daughter board is stripped out and merged into the designated region of the master, and the two are combined into a new photo or video. If the matching degree is high, the differing parts are merged or replaced automatically.
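The pixel-level synthesis of step S140 can be sketched as follows with NumPy: the target region cut from the first image is pasted over the replacement region of the second image, and everything outside the replacement region keeps the second image's content. This is a deliberately simplified illustration (regions given as (x, y, width, height) tuples of equal size, no blending), not the exact procedure of the patent.

    import numpy as np

    def synthesize(first_img: np.ndarray, target_xywh: tuple,
                   second_img: np.ndarray, replace_xywh: tuple) -> np.ndarray:
        """Paste the target region of the first image over the replacement
        region of the second image; the rest of the second image is kept."""
        tx, ty, tw, th = target_xywh
        rx, ry, rw, rh = replace_xywh
        patch = first_img[ty:ty + th, tx:tx + tw]
        result = second_img.copy()
        # Simplifying assumption: both regions have the same size (tw == rw, th == rh).
        result[ry:ry + rh, rx:rx + rw] = patch
        return result

Applied to two photos this yields the merged photo directly; applied frame by frame it yields one merged video frame at a time.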
According to the technical solution of this embodiment: 1) Smart shooting with a dual-camera, dual-screen phone brings new enjoyment once the two screens are unfolded: one screen can take photos while the other records video. At a child's festival performance, for example, a high-definition snapshot of a key moment can be captured while the whole video is still being recorded; because the picture and the video are both displayed, they can be merged, so that the user appears to take part in the scene, whether in a picture or in a video. 2) After shooting, the "video-video" combination function lets the two videos merge naturally, producing effects such as the user appearing to run alongside a tiger, or sitting opposite a celebrity at dinner and occasionally making eye contact. 3) If the front-shot and rear-shot videos (or an existing video) share the same time and sound background, the matching degree of the captured video regions is very high; the target (person/object) to be replaced in the daughter board is designated, and the two are merged automatically according to the specified conditions. A typical scenario: at a concert, the user and the star appear in the same video picture, singing together or even in a duet.
As shown in fig. 3, in an embodiment of the present invention, there is provided a photographed image composing method, in which a terminal has a first portion and a second portion that are movably connected, the first portion has a first screen and a first camera thereon, and the second portion has a second screen and a second camera thereon. A mobile phone with two cameras and two screens is shown in fig. 2.
The method of the embodiment comprises the following steps:
step S310, a first image captured by the first camera is displayed through the first screen.
Step S320, displaying a second image shot by the second camera through the second screen.
Step S330, identifying content in the first image that is not present in the second image, selecting the target region according to that content, and selecting the replacement region from the second image.
In this embodiment: 1) the first screen displays a photo as the daughter board and the second screen displays a video as the master; a selected target region in the daughter board is designated, and a certain frame, or certain regions of certain frames, in the master is designated as the replacement or overlay target. 2) The first screen displays a video as the daughter board and the second screen displays a video as the master; a video target region within a certain time span selected in the daughter board is designated (with its sound, or muted), and a region of the master within a certain, or the same, time interval is selected as the replacement or overlay target. 3) Based on stored coordinate parameters (latitude and longitude records), background, environment, picture content and so on, the daughter board and the master are scanned and compared; once the matching degree reaches a certain (user-defined) value, the same or different places in the daughter board and the master can be locked.
In this embodiment, based on a comparison of the pictures' shooting parameters, key buildings or landmarks are scanned and identified, and the differing content parts of the daughter board are intelligently merged into the master, i.e. the target region is selected.
Step S340, when the first image is a video image, images of a preset number of frames or within a preset time range in the target region are extracted for synthesis; when the second image is a video image, images of a preset number of frames or within a preset time range in the replacement region are deleted before synthesis.
In this embodiment: 1) one or more photo regions (a single daughter board, or multiple daughter boards displayed on the screen) are stripped out and placed into certain frames of the master video, then synthesized. 2) A picture region of a certain frame of the daughter-board video is stripped out, moved onto the master picture to replace the designated region, and synthesized. 3) Certain locked regions of the daughter-board video during a certain period are stripped out, moved onto the locked regions of the master during a certain period for replacement or overlay, and synthesized.
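The frame selection of step S340 can be sketched as below, assuming a video is held as a list of frames with a known frame rate; the function names are illustrative only.

    from typing import List

    def frame_indices(num_frames: int, fps: float, start_s: float, end_s: float) -> range:
        """Indices of the frames that fall inside the preset time range [start_s, end_s]."""
        first = max(0, int(start_s * fps))
        last = min(num_frames, int(end_s * fps) + 1)
        return range(first, last)

    def extract_frames(frames: List, fps: float, start_s: float, end_s: float) -> List:
        """Daughter-board frames to be used for synthesis."""
        return [frames[i] for i in frame_indices(len(frames), fps, start_s, end_s)]

    def drop_frames(frames: List, fps: float, start_s: float, end_s: float) -> List:
        """Master video with the frames in the replacement time range removed,
        so they can be replaced during synthesis."""
        to_drop = set(frame_indices(len(frames), fps, start_s, end_s))
        return [f for i, f in enumerate(frames) if i not in to_drop]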
In this example, 1) photo merging: existing photos are merged, or previously shot photos are merged, updated and replaced. Either screen showing the photo shot by the front or rear camera is selected as the master, and an existing photo, or the photo shot by the other camera, is selected on the second screen as the daughter board (the daughter board supplies the effect-picture part that will replace content); dragging and position replacement can be performed across the two screens, so the picture achieves a natural effect by observing, adjusting and combining as it is composed.
Example: picture 1 is selected as the daughter board and placed on the first screen, and the required content region in picture 1 is selected and locked for later use. Picture 2, which can be merged with it, is selected as the comparison picture and placed on the second screen as the master; the locked region of picture 1 is then dragged directly onto the master on the second screen for replacement. (The master may also lock a region to be replaced, but this is not necessary.)
2) Processing principles involving video: photos are merged into a certain frame of a video, or a certain picture in a video is captured and merged into a photo.
Picture -> video: the content of a picture is merged into a selected frame of a currently shot or existing video. The picture selected on the first screen is the daughter board, and the video displayed on the second screen is the master. In the daughter board, the region or regions to be cut out are selected, singly or multiply. The frame of the replaced or overlaid region in the video displayed on the second screen is locked, the content of the daughter board is moved to the corresponding position on the master, and the result is synthesized.
Video -> picture: content from certain regions of a currently shot or existing video is merged into a selected picture. The video region selected on the first screen is the daughter board, and the picture displayed on the second screen is the master. In the daughter board's video frames, the region or regions to be cut out are selected, singly or multiply. The content of the daughter board is moved to the corresponding position on the master, replaced or overlaid, and synthesized.
Video -> video: some content of another video is merged onto the currently shot or already shot video, replaced, overlaid and synthesized. One video is displayed on either screen as the master; the other selected video is the daughter board on the second screen (the daughter board is the part of the video stream, certain frames and so on, that will be extracted and used); the content in the daughter board is selected and locked. The approximate region to be replaced in the master is locked, the content of the second-screen daughter board is dragged into the first-screen master, the region whose locked time periods overlap is replaced, and the result is slightly adjusted and merged.
One master can be edited against several daughter boards, for example: a previously recorded video serves as the master, and the photos or short videos for the designated regions to be replaced or added are designated as different daughter boards (daughter board 1, daughter board 2), which are substituted according to the variable regions of the master to achieve a natural synthesis effect.
Step S350, during synthesis, the environmental parameters corresponding to the target region and the environmental parameters corresponding to the portion of the second image outside the replacement region are adjusted to a preset goodness of fit.
In this embodiment, when videos displayed on the first screen and the second screen are merged and saved, conditions such as background, spatial coordinates and sound are almost consistent, and the videos can be merged automatically based on a judgment of these parameters.
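One simple reading of step S350 is matching basic image statistics between the pasted patch and the surrounding master content; the sketch below only matches mean brightness and is a hedged illustration, not the adjustment prescribed by the patent.

    import numpy as np

    def match_brightness(patch: np.ndarray, surroundings: np.ndarray,
                         fit: float = 0.7) -> np.ndarray:
        """Pull the patch's mean brightness toward that of the surrounding
        master content.  `fit` plays the role of the preset goodness of fit:
        0.0 keeps the patch unchanged, 1.0 matches the surroundings exactly.
        Assumes 8-bit images."""
        patch_mean = patch.astype(np.float32).mean()
        target_mean = surroundings.astype(np.float32).mean()
        adjusted = patch.astype(np.float32) + fit * (target_mean - patch_mean)
        return np.clip(adjusted, 0, 255).astype(np.uint8)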
In this embodiment, for video, when the environmental scene, the latitude and longitude of the shooting location, and the background sound of each frame shot by the two cameras reach a certain degree of coincidence, for example 70% or more, automatic merging is performed (the merging mode may be, for example, a "same concert" mode): the identical feature parts are retained, the differing matching regions are added or replaced, and the result is combined on the master. During this process a list of matching parameters may be presented for the user to select (e.g. sound, same sound & same picture, where "picture" means the same latitude and longitude is detected, a certain landmark building, or a picture selected and locked by the user in advance), and the user adjusts, merges and saves according to these conditions. If the matching degree does not reach the threshold, for example below 70%, a specified adjustment is performed: regions are selected manually or preset, operations such as replacement, addition and deletion are applied, and the result is saved once the synthesis requirement is met. (For the same scene the matching degree is higher and the probability of use is large, as in the concert scenario above.)
Similarly, automatic merging can also be applied to photos: if the matching degree of the technical parameters of the shots (focal length, location, time) or of the background scan exceeds a certain value (e.g. 70%), the differing image parts are selectively combined while the identical part is retained. The merge is automatic.
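The automatic-merge decision described above for both videos and photos can be sketched as a weighted matching score over shooting parameters. The 70% threshold follows the example in the text; the individual scoring rules and weights are assumptions for illustration.

    from dataclasses import dataclass

    @dataclass
    class ShotMeta:
        latitude: float
        longitude: float
        timestamp: float        # seconds since epoch
        focal_length_mm: float

    def matching_degree(a: ShotMeta, b: ShotMeta, background_similarity: float) -> float:
        """Rough 0..1 score combining location, time, focal length and a
        background-scan similarity supplied by the caller."""
        same_place = 1.0 if (abs(a.latitude - b.latitude) < 1e-3
                             and abs(a.longitude - b.longitude) < 1e-3) else 0.0
        same_time = 1.0 if abs(a.timestamp - b.timestamp) < 60 else 0.0
        focal = 1.0 - min(1.0, abs(a.focal_length_mm - b.focal_length_mm)
                          / max(a.focal_length_mm, 1e-6))
        return 0.3 * same_place + 0.2 * same_time + 0.2 * focal + 0.3 * background_similarity

    def should_auto_merge(a: ShotMeta, b: ShotMeta,
                          background_similarity: float, threshold: float = 0.7) -> bool:
        """Merge automatically at or above the threshold; below it, fall back to
        manual selection, addition, deletion and replacement."""
        return matching_degree(a, b, background_similarity) >= threshold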
Example of automatic merging: many automatic combination modes can be defined, such as a concert mode. At a concert, the star's picture and the user's own picture are naturally merged into one picture or video. The screen showing the star is used as the master, and the replaceable part of the star picture is locked and selected (for video, there is a safeguard against automatically covering master content). The other screen is used as the daughter board, and the user's own video portion is automatically fused into the locked position of the selected region in the master, fulfilling the dream of singing face to face with the star. A "same activity" mode can also be designed, in which two videos of different target persons active in the same area are automatically merged into one.
Furthermore, if a multi-angle intelligent shooting system is added during shooting, the result is more interesting and offers greater freedom of choice in synthesis. That is, the device can rotate for multi-angle shooting, for example rotating 360 degrees in space to track a subject. In the dual-camera dual-screen phone, the front/rear main body is controlled to rotate through 360 degrees; the camera itself may or may not support rotation (if the camera rotates, full-angle shooting can be achieved even when the screen does not). The phone has a built-in system, or an external host with a software system, whose software supports intelligent shooting and (intelligent) image/video synthesis. Structurally, the dual-camera dual-screen device supports the design requirement of multi-angle rotation, and the main body includes a multi-angle (e.g. 360-degree spatial) control chip, a built-in intelligent follow-shooting system, and an intelligent image/video synthesis application, as shown in fig. 4.
① Automatic focusing and automatic target locking (face recognition, animal recognition, etc.): the camera automatically focuses on a recognized moving region or a specially defined region (such as a human face or an animal); templates are stored in the background for retrieval; the user can also select and manually adjust the region, after which focusing is automatic; high-definition shooting is then performed and the stored data are intelligently combined (for example, a video or pictures of a concert scene composited with the star).
② Multi-angle rotation tracking system and track recording: the system follows and shoots based on the movement of the recognized and locked region target, combining the automatic focusing and automatic locking features, while recording the movement track and spatial dimensions. The recorded track is the movement track during video shooting, divided into the locked-region track (which can be regarded as the movement of a small target) and the overall movement track of the video shot (which can be realized by the rotating-shaft chip together with a software recording method). The track parameters are mainly used to simulate the replacement scene (such as the selection or re-shooting of features in the daughter board), so that the videos shot by the front and rear cameras can be merged naturally.
Pictures or videos shot from multiple angles, especially videos, can record the movement track of the target, for example the target turns 30 degrees to the left and then 30 degrees to the right. To simulate a picture of running with the star or running with the tiger, the corresponding feature content in the daughter board is selected in detail; if that content also contains the 30-degree left and right turns (or certain feature actions are re-enacted), special planning (selection or re-shooting of particular targets) and adjustment can be performed in reverse according to the recorded track parameters, and the videos can then be merged automatically.
The automatic shooting may include, for example, automatic region recognition, face recognition, or automatic tracking of moving people and objects. The region is selected and locked automatically or manually for high-definition automatic follow shooting. In a handheld shooting mode, a moving area is denoted S(X, Y), where X and Y represent the coordinate area and S the selected area; a video region is denoted H(X, Y, Z(T)), where X and Y represent the video coordinate area, Z is a three-dimensional spatial coefficient that depends on time, T is the time, and H represents the variation of X, Y and Z over the time T; the conversion is realized in software.
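The handheld-mode notation above, a selected area S(X, Y) and a video region H(X, Y, Z(T)) that varies over time, can be read as the data structure sketched below; this is only one interpretation of the notation, with hypothetical names.

    from dataclasses import dataclass
    from typing import Dict, Tuple

    @dataclass
    class LockedRegion:
        """S(X, Y): the selected/locked coordinate area on one frame."""
        x: int
        y: int
        width: int
        height: int

    @dataclass
    class TrackedRegion:
        """H(X, Y, Z(T)): the locked region plus a depth/rotation coefficient Z
        recorded per timestamp, i.e. the movement track of the small target."""
        samples: Dict[float, Tuple[LockedRegion, float]]   # time T -> (region, Z)

        def at(self, t: float) -> Tuple[LockedRegion, float]:
            """Return the sample recorded closest in time to t."""
            nearest = min(self.samples, key=lambda s: abs(s - t))
            return self.samples[nearest]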
③ Fixed-object shooting means that the user actively controls the focal length and can adjust the designated range to achieve high-definition shooting of that region.
④ The intelligent matching of the front and rear cameras means that any object previewed or shot by one camera is automatically selected as a focus reference, and the corresponding parameters of the other camera (apparent size and color difference as seen by the human eye, distance estimation, etc.) are adjusted automatically according to parameters such as its focal length, so that the pictures and videos shot by the front and rear cameras can be merged automatically with a more natural result.
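A hedged sketch of the front/rear focal-length matching in ④: given the distance of the subject locked by one camera, the other camera's focal length is chosen so its subject appears at a comparable scale. The formula is a plain pinhole-camera approximation, not a procedure taken from the patent.

    def matched_focal_length(ref_focal_mm: float, ref_distance_m: float,
                             other_distance_m: float) -> float:
        """Choose a focal length for the second camera so that its subject at
        other_distance_m is rendered at roughly the same scale as the reference
        subject (pinhole approximation: image scale ~ focal_length / distance)."""
        return ref_focal_mm * other_distance_m / ref_distance_m

For example, if the rear camera locks a performer 10 m away at 50 mm and the front-camera user stands 0.5 m away, the matched setting would be 2.5 mm equivalent, i.e. a wide-angle view, so the two subjects can be merged at similar sizes.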
According to the technical solution of this embodiment, one screen of the dual-screen phone is structurally designed to rotate 360 degrees in space. The camera is opened, a target or moving object is selected and locked manually or automatically, and a start instruction (video or photo capture) is issued; the person/object in front is locked and the shot automatically follows its movement, so automatic panoramic shooting can be completed. During shooting, if the user enables the intelligent shooting mode, the focal length of the rear or front camera is adjusted automatically to match the distance of the person/object locked by the other camera, so that the people/objects shot by the front and rear cameras are at the best matching distance, which makes merging convenient. The ideal scenario: although the star or idol is not actually beside the user, pictures or videos can be merged naturally with the best effect, for example the user can see themselves riding the bumper cars together with their idol in the merged video.
As shown in fig. 5, in one embodiment of the present invention, a terminal is provided, the terminal having a first portion and a second portion that are movably connected, the first portion having a first screen 510 and a first camera 520 thereon, the second portion having a second screen 530 and a second camera 540 thereon; the terminal further comprises a processor 550, a memory 560 and a communication bus 570;
communication bus 570 is used to enable connective communication between processor 550 and memory 560;
the processor 550 is configured to execute the photographed image synthesizing program 560 stored in the memory to implement the steps of:
a first image shot by a first camera is displayed through a first screen.
In the present embodiment, the image includes a picture or a video.
And displaying a second image shot by the second camera through the second screen.
In this embodiment, the dual-screen device is opened, the screen displays are defined, and the correspondence between screens and cameras is set automatically (a template may be preset) or manually. For example, the first screen displays the front camera's preview and the pictures or videos it captures, and the second screen displays the rear camera's preview and the pictures or videos it captures. The photographing or video-recording mode is selected: the camera application is opened and the front/rear shooting modes are chosen separately, the front camera recording video or taking photos and the rear camera recording video or taking photos. The four combinations are: front photo and rear photo; front video and rear video; front photo and rear video; front video and rear photo. 1) "Photo-video" combination: a photo is taken in front and a video is shot in the rear, or a video is shot in front and a photo is taken in the rear, i.e., one camera takes a photo while the other records a video, and the photo and the video are stored after capture. 2) "Video-video" combination: the front and rear cameras each record video and the recordings are stored separately.
A target region is selected from the first image and a replacement region is selected from the second image.
In this embodiment, the display of either screen is selected as the daughter board, from which a picture region, or certain regions of the video within a certain time span, can be extracted and set aside (as the target region). The display of the other screen serves as the master, and certain regions of the master video within a certain time span are replaced by the selected parts (the replacement region). The master may also leave the region unspecified: the daughter-board selection can simply be dragged to any position on the master, and the corresponding position serves as the replaced region.
And synthesizing the target region and the portion of the second image outside the replacement region to obtain a synthesized image.
In this embodiment, the designated region of the daughter board is stripped out and merged into the designated region of the master, and the two are combined into a new photo or video. If the matching degree is high, the differing parts are merged or replaced automatically.
According to the technical solution of this embodiment: 1) Smart shooting with a dual-camera, dual-screen phone brings new enjoyment once the two screens are unfolded: one screen can take photos while the other records video. At a child's festival performance, for example, a high-definition snapshot of a key moment can be captured while the whole video is still being recorded; because the picture and the video are both displayed, they can be merged, so that the user appears to take part in the scene, whether in a picture or in a video. 2) After shooting, the "video-video" combination function lets the two videos merge naturally, producing effects such as the user appearing to run alongside a tiger, or sitting opposite a celebrity at dinner and occasionally making eye contact. 3) If the front-shot and rear-shot videos (or an existing video) share the same time and sound background, the matching degree of the captured video regions is very high; the target (person/object) to be replaced in the daughter board is designated, and the two are merged automatically according to the specified conditions. A typical scenario: at a concert, the user and the star appear in the same video picture, singing together or even in a duet.
As shown in fig. 5, in one embodiment of the present invention, a terminal is provided, the terminal having a first portion and a second portion that are movably connected, the first portion having a first screen 510 and a first camera 520 thereon, the second portion having a second screen 530 and a second camera 540 thereon; the terminal further comprises a processor 550, a memory 560 and a communication bus 570;
communication bus 570 is used to enable connective communication between processor 550 and memory 560;
the processor 550 is configured to execute the photographed image synthesizing program 560 stored in the memory to implement the steps of:
a first image shot by a first camera is displayed through a first screen.
And displaying a second image shot by the second camera through the second screen.
And identifying content in the first image that is not present in the second image, selecting the target region according to that content, and selecting the replacement region from the second image.
In this embodiment: 1) the first screen displays a photo as the daughter board and the second screen displays a video as the master; a selected target region in the daughter board is designated, and a certain frame, or certain regions of certain frames, in the master is designated as the replacement or overlay target. 2) The first screen displays a video as the daughter board and the second screen displays a video as the master; a video target region within a certain time span selected in the daughter board is designated (with its sound, or muted), and a region of the master within a certain, or the same, time interval is selected as the replacement or overlay target. 3) Based on stored coordinate parameters (latitude and longitude records), background, environment, picture content and so on, the daughter board and the master are scanned and compared; once the matching degree reaches a certain (user-defined) value, the same or different places in the daughter board and the master can be locked.
In this embodiment, based on a comparison of the pictures' shooting parameters, key buildings or landmarks are scanned and identified, and the differing content parts of the daughter board are intelligently merged into the master, i.e. the target region is selected.
When the first image is a video image, images of a preset number of frames or within a preset time range in the target region are extracted for synthesis; when the second image is a video image, images of a preset number of frames or within a preset time range in the replacement region are deleted before synthesis.
In this embodiment: 1) one or more photo regions (a single daughter board, or multiple daughter boards displayed on the screen) are stripped out and placed into certain frames of the master video, then synthesized. 2) A picture region of a certain frame of the daughter-board video is stripped out, moved onto the master picture to replace the designated region, and synthesized. 3) Certain locked regions of the daughter-board video during a certain period are stripped out, moved onto the locked regions of the master during a certain period for replacement or overlay, and synthesized.
In this example, 1) photo merging: existing photos are merged, or previously shot photos are merged, updated and replaced. Either screen showing the photo shot by the front or rear camera is selected as the master, and an existing photo, or the photo shot by the other camera, is selected on the second screen as the daughter board (the daughter board supplies the effect-picture part that will replace content); dragging and position replacement can be performed across the two screens, so the picture achieves a natural effect by observing, adjusting and combining as it is composed.
Example: picture 1 is selected as the daughter board and placed on the first screen, and the required content region in picture 1 is selected and locked for later use. Picture 2, which can be merged with it, is selected as the comparison picture and placed on the second screen as the master; the locked region of picture 1 is then dragged directly onto the master on the second screen for replacement. (The master may also lock a region to be replaced, but this is not necessary.)
2) Processing principles involving video: photos are merged into a certain frame of a video, or a certain picture in a video is captured and merged into a photo.
Picture -> video: the content of a picture is merged into a selected frame of a currently shot or existing video. The picture selected on the first screen is the daughter board, and the video displayed on the second screen is the master. In the daughter board, the region or regions to be cut out are selected, singly or multiply. The frame of the replaced or overlaid region in the video displayed on the second screen is locked, the content of the daughter board is moved to the corresponding position on the master, and the result is synthesized.
Video -> picture: content from certain regions of a currently shot or existing video is merged into a selected picture. The video region selected on the first screen is the daughter board, and the picture displayed on the second screen is the master. In the daughter board's video frames, the region or regions to be cut out are selected, singly or multiply. The content of the daughter board is moved to the corresponding position on the master, replaced or overlaid, and synthesized.
Video -> video: some content of another video is merged onto the currently shot or already shot video, replaced, overlaid and synthesized. One video is displayed on either screen as the master; the other selected video is the daughter board on the second screen (the daughter board is the part of the video stream, certain frames and so on, that will be extracted and used); the content in the daughter board is selected and locked. The approximate region to be replaced in the master is locked, the content of the second-screen daughter board is dragged into the first-screen master, the region whose locked time periods overlap is replaced, and the result is slightly adjusted and merged.
One master can be edited against several daughter boards, for example: a previously recorded video serves as the master, and the photos or short videos for the designated regions to be replaced or added are designated as different daughter boards (daughter board 1, daughter board 2), which are substituted according to the variable regions of the master to achieve a natural synthesis effect.
During synthesis, the environmental parameters corresponding to the target region and the environmental parameters corresponding to the portion of the second image outside the replacement region are adjusted to a preset goodness of fit.
In this embodiment, when videos displayed on the first screen and the second screen are merged and saved, conditions such as background, spatial coordinates and sound are almost consistent, and the videos can be merged automatically based on a judgment of these parameters.
In this embodiment, for video, when the environmental scene, the latitude and longitude of the shooting location, and the background sound of each frame shot by the two cameras reach a certain degree of coincidence, for example 70% or more, automatic merging is performed (the merging mode may be, for example, a "same concert" mode): the identical feature parts are retained, the differing matching regions are added or replaced, and the result is combined on the master. During this process a list of matching parameters may be presented for the user to select (e.g. sound, same sound & same picture, where "picture" means the same latitude and longitude is detected, a certain landmark building, or a picture selected and locked by the user in advance), and the user adjusts, merges and saves according to these conditions. If the matching degree does not reach the threshold, for example below 70%, a specified adjustment is performed: regions are selected manually or preset, operations such as replacement, addition and deletion are applied, and the result is saved once the synthesis requirement is met. (For the same scene the matching degree is higher and the probability of use is large, as in the concert scenario above.)
Similarly, automatic merging can also be applied to photos: if the matching degree of the technical parameters of the shots (focal length, location, time) or of the background scan exceeds a certain value (e.g. 70%), the differing image parts are selectively combined while the identical part is retained. The merge is automatic.
Example of automatic merging: many automatic combination modes can be defined, such as a concert mode. At a concert, the star's picture and the user's own picture are naturally merged into one picture or video. The screen showing the star is used as the master, and the replaceable part of the star picture is locked and selected (for video, there is a safeguard against automatically covering master content). The other screen is used as the daughter board, and the user's own video portion is automatically fused into the locked position of the selected region in the master, fulfilling the dream of singing face to face with the star. A "same activity" mode can also be designed, in which two videos of different target persons active in the same area are automatically merged into one.
Furthermore, if a multi-angle intelligent shooting system is added during shooting, the result is more interesting and offers greater freedom of choice in synthesis. That is, the device can rotate for multi-angle shooting, for example rotating 360 degrees in space to track a subject. In the dual-camera dual-screen phone, the front/rear main body is controlled to rotate through 360 degrees; the camera itself may or may not support rotation (if the camera rotates, full-angle shooting can be achieved even when the screen does not). The phone has a built-in system, or an external host with a software system, whose software supports intelligent shooting and (intelligent) image/video synthesis. Structurally, the dual-camera dual-screen device supports the design requirement of multi-angle rotation, and the main body includes a multi-angle (e.g. 360-degree spatial) control chip, a built-in intelligent follow-shooting system, and an intelligent image/video synthesis application, as shown in fig. 4.
① Automatic focusing and automatic target locking (face recognition, animal recognition, etc.): the camera automatically focuses on a recognized moving region or a specially defined region (such as a human face or an animal); templates are stored in the background for retrieval; the user can also select and manually adjust the region, after which focusing is automatic; high-definition shooting is then performed and the stored data are intelligently combined (for example, a video or pictures of a concert scene composited with the star).
② Multi-angle rotation tracking system and track recording: the system follows and shoots based on the movement of the recognized and locked region target, combining the automatic focusing and automatic locking features, while recording the movement track and spatial dimensions. The recorded track is the movement track during video shooting, divided into the locked-region track (which can be regarded as the movement of a small target) and the overall movement track of the video shot (which can be realized by the rotating-shaft chip together with a software recording method). The track parameters are mainly used to simulate the replacement scene (such as the selection or re-shooting of features in the daughter board), so that the videos shot by the front and rear cameras can be merged naturally.
Pictures or videos shot from multiple angles, especially videos, can record the movement track of the target, for example the target turns 30 degrees to the left and then 30 degrees to the right. To simulate a picture of running with the star or running with the tiger, the corresponding feature content in the daughter board is selected in detail; if that content also contains the 30-degree left and right turns (or certain feature actions are re-enacted), special planning (selection or re-shooting of particular targets) and adjustment can be performed in reverse according to the recorded track parameters, and the videos can then be merged automatically.
The automatic shooting may include, for example, automatic region recognition, face recognition, or automatic tracking of moving people and objects. The region is selected and locked automatically or manually for high-definition automatic follow shooting. In a handheld shooting mode, a moving area is denoted S(X, Y), where X and Y represent the coordinate area and S the selected area; a video region is denoted H(X, Y, Z(T)), where X and Y represent the video coordinate area, Z is a three-dimensional spatial coefficient that depends on time, T is the time, and H represents the variation of X, Y and Z over the time T; the conversion is realized in software.
③ Fixed-object shooting means that the user actively controls the focal length and can adjust the designated range to achieve high-definition shooting of that region.
④ The intelligent matching of the front and rear cameras means that any object previewed or shot by one camera is automatically selected as a focus reference, and the corresponding parameters of the other camera (apparent size and color difference as seen by the human eye, distance estimation, etc.) are adjusted automatically according to parameters such as its focal length, so that the pictures and videos shot by the front and rear cameras can be merged automatically with a more natural result.
According to the technical solution of this embodiment, one screen of the dual-screen phone is structurally designed to rotate 360 degrees in space. The camera is opened, a target or moving object is selected and locked manually or automatically, and a start instruction (video or photo capture) is issued; the person/object in front is locked and the shot automatically follows its movement, so automatic panoramic shooting can be completed. During shooting, if the user enables the intelligent shooting mode, the focal length of the rear or front camera is adjusted automatically to match the distance of the person/object locked by the other camera, so that the people/objects shot by the front and rear cameras are at the best matching distance, which makes merging convenient. The ideal scenario: although the star or idol is not actually beside the user, pictures or videos can be merged naturally with the best effect, for example the user can see themselves riding the bumper cars together with their idol in the merged video.
Embodiments of the present invention also provide a computer readable storage medium storing one or more programs, the one or more programs being executable by one or more processors to perform the steps of:
a first image shot by a first camera is displayed through a first screen.
In the present embodiment, the image includes a picture or a video.
And displaying a second image shot by the second camera through the second screen.
In this embodiment, the dual-screen device is opened, the screen displays are defined, and the correspondence between screens and cameras is set automatically (a template may be preset) or manually. For example, the first screen displays the front camera's preview and the pictures or videos it captures, and the second screen displays the rear camera's preview and the pictures or videos it captures. The photographing or video-recording mode is selected: the camera application is opened and the front/rear shooting modes are chosen separately, the front camera recording video or taking photos and the rear camera recording video or taking photos. The four combinations are: front photo and rear photo; front video and rear video; front photo and rear video; front video and rear photo. 1) "Photo-video" combination: a photo is taken in front and a video is shot in the rear, or a video is shot in front and a photo is taken in the rear, i.e., one camera takes a photo while the other records a video, and the photo and the video are stored after capture. 2) "Video-video" combination: the front and rear cameras each record video and the recordings are stored separately.
A target region is selected from the first image and a replacement region is selected from the second image.
In this embodiment, the display of either screen is selected as the daughter board, from which a picture region, or certain regions of the video within a certain time span, can be extracted and set aside (as the target region). The display of the other screen serves as the master, and certain regions of the master video within a certain time span are replaced by the selected parts (the replacement region). The master may also leave the region unspecified: the daughter-board selection can simply be dragged to any position on the master, and the corresponding position serves as the replaced region.
And synthesizing the target region and the portion of the second image outside the replacement region to obtain a synthesized image.
In this embodiment, the designated region of the daughter board is stripped out and merged into the designated region of the master, and the two are combined into a new photo or video. If the matching degree is high, the differing parts are merged or replaced automatically.
According to the technical solution of this embodiment: 1) Smart shooting with a dual-camera, dual-screen phone brings new enjoyment once the two screens are unfolded: one screen can take photos while the other records video. At a child's festival performance, for example, a high-definition snapshot of a key moment can be captured while the whole video is still being recorded; because the picture and the video are both displayed, they can be merged, so that the user appears to take part in the scene, whether in a picture or in a video. 2) After shooting, the "video-video" combination function lets the two videos merge naturally, producing effects such as the user appearing to run alongside a tiger, or sitting opposite a celebrity at dinner and occasionally making eye contact. 3) If the front-shot and rear-shot videos (or an existing video) share the same time and sound background, the matching degree of the captured video regions is very high; the target (person/object) to be replaced in the daughter board is designated, and the two are merged automatically according to the specified conditions. A typical scenario: at a concert, the user and the star appear in the same video picture, singing together or even in a duet.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method for synthesizing a captured image, wherein a terminal has a first portion and a second portion movably connected, the first portion having a first screen and a first camera thereon, the second portion having a second screen and a second camera thereon, the method comprising:
displaying a first image shot by the first camera through the first screen;
displaying a second image shot by the second camera through the second screen;
selecting a target region from the first image and selecting a replacement region from the second image;
and synthesizing the target region and the portion of the second image other than the replacement region to obtain a synthesized image.
2. The method of claim 1, wherein synthesizing the target region and the portion of the second image other than the replacement region to obtain a synthesized image comprises:
when the first image is a video image, extracting, from the target region, images of a preset number of frames or within a preset time range for synthesis.
3. The method of claim 1, wherein synthesizing the target region and the portion of the second image other than the replacement region to obtain a synthesized image comprises:
when the second image is a video image, deleting, from the replacement region, images of a preset number of frames or within a preset time range before synthesizing.
4. The method of claim 1, wherein selecting a target region from the first image comprises:
identifying, in the first image, content that is not present in the second image, and selecting the target region according to the content.
5. The method of claim 1, wherein synthesizing the target region and the portion of the second image other than the replacement region to obtain a synthesized image comprises:
adjusting, during synthesis, the environmental parameters corresponding to the target region and to the portion of the second image other than the replacement region to a preset goodness of fit.
6. A terminal, having a first portion and a second portion that are movably connected, the first portion having a first screen and a first camera thereon, and the second portion having a second screen and a second camera thereon; the terminal further comprises a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute a photographed image synthesizing program stored in the memory to realize the steps of:
displaying a first image shot by the first camera through the first screen;
displaying a second image shot by the second camera through the second screen;
selecting a target region from the first image and selecting a replacement region from the second image;
and synthesizing the target region and the portion of the second image other than the replacement region to obtain a synthesized image.
7. The terminal according to claim 6, wherein, in synthesizing the target region and the portion of the second image other than the replacement region into a synthesized image, the processor executes the photographed image synthesizing program to implement the steps of:
when the first image is a video image, extracting, from the target region, images of a preset number of frames or within a preset time range for synthesis.
8. The terminal according to claim 6, wherein, in synthesizing the target region and the portion of the second image other than the replacement region into a synthesized image, the processor executes the photographed image synthesizing program to implement the steps of:
when the second image is a video image, deleting, from the replacement region, images of a preset number of frames or within a preset time range before synthesizing.
9. The terminal according to claim 6, wherein, in selecting a target region from the first image, the processor executes the photographed image synthesizing program to implement the steps of:
identifying, in the first image, content that is not present in the second image, and selecting the target region according to the content.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs which are executable by one or more processors to implement the steps of the method according to any one of claims 1 to 5.
CN201811378939.6A 2018-11-19 2018-11-19 Photographed image synthesizing method, terminal, and computer-readable storage medium Pending CN111200686A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811378939.6A CN111200686A (en) 2018-11-19 2018-11-19 Photographed image synthesizing method, terminal, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811378939.6A CN111200686A (en) 2018-11-19 2018-11-19 Photographed image synthesizing method, terminal, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN111200686A true CN111200686A (en) 2020-05-26

Family

ID=70745793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811378939.6A Pending CN111200686A (en) 2018-11-19 2018-11-19 Photographed image synthesizing method, terminal, and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN111200686A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104580910A (en) * 2015-01-09 2015-04-29 宇龙计算机通信科技(深圳)有限公司 Image synthesis method and system based on front camera and rear camera
CN106027900A (en) * 2016-06-22 2016-10-12 维沃移动通信有限公司 Photographing method and mobile terminal
JP2018023069A (en) * 2016-08-05 2018-02-08 フリュー株式会社 Game player for creating photograph and display method
CN107820016A (en) * 2017-11-29 2018-03-20 努比亚技术有限公司 Shooting display methods, double screen terminal and the computer-readable storage medium of double screen terminal
CN108093171A (en) * 2017-11-30 2018-05-29 努比亚技术有限公司 A kind of photographic method, terminal and computer readable storage medium
CN108418916A (en) * 2018-02-28 2018-08-17 努比亚技术有限公司 Image capturing method, mobile terminal based on double-sided screen and readable storage medium storing program for executing

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112954221A (en) * 2021-03-11 2021-06-11 深圳市几何数字技术服务有限公司 Method for real-time photo shooting
WO2023016214A1 (en) * 2021-08-12 2023-02-16 惠州Tcl云创科技有限公司 Video processing method and apparatus, and mobile terminal
CN114745508A (en) * 2022-06-13 2022-07-12 荣耀终端有限公司 Shooting method, terminal device and storage medium
CN114745508B (en) * 2022-06-13 2023-10-31 荣耀终端有限公司 Shooting method, terminal equipment and storage medium

Similar Documents

Publication Publication Date Title
US9927948B2 (en) Image display apparatus and image display method
EP2779628B1 (en) Image processing method and device
CN102938825B (en) A kind ofly to take pictures and the method for video and device
JP4153146B2 (en) Image control method for camera array and camera array
CN106934777B (en) Scanning image acquisition method and device
CN105072314A (en) Virtual studio implementation method capable of automatically tracking objects
CN110072058B (en) Image shooting device and method and terminal
CN110545378B (en) Intelligent recognition shooting system and method for multi-person scene
CN111200686A (en) Photographed image synthesizing method, terminal, and computer-readable storage medium
US20120133816A1 (en) System and method for user guidance of photographic composition in image acquisition systems
CN113329172B (en) Shooting method and device and electronic equipment
CN111083371A (en) Shooting method and electronic equipment
WO2023016214A1 (en) Video processing method and apparatus, and mobile terminal
CN111093022A (en) Image shooting method, device, terminal and computer storage medium
CN114125179B (en) Shooting method and device
JP2004239967A (en) Projector
JP2001036898A (en) Camera system for generating panoramic video
CN110796690B (en) Image matching method and image matching device
DE102019133659A1 (en) Electronic device, control method, program and computer readable medium
CN112672057B (en) Shooting method and device
JP2022003818A (en) Image display system, image display program, image display method, and server
CN113542463A (en) Video shooting device and method based on folding screen, storage medium and mobile terminal
TW202211667A (en) Full fov conference camera device
US20230328364A1 (en) Processing method and processing device
WO2022226745A1 (en) Photographing method, control apparatus, photographing device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20200526