WO2023001107A1 - Photographic image processing method and device - Google Patents
Photographic image processing method and device
- Publication number
- WO2023001107A1 (PCT/CN2022/106268; CN2022106268W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- dynamic object
- series
- static
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/68—Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
- H04N23/681—Motion detection
- H04N23/6811—Motion detection based on the image signal
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Definitions
- the present disclosure relates to image processing, and more particularly to photographic image processing.
- shutter speed is the parameter that controls the length of the exposure when taking a photo.
- a slow shutter obtains a long exposure by reducing the shutter speed and extending the shutter time, so as to achieve special dynamic effects: for example, capturing the flow of water and clouds, or recording the bright trails of moving objects, such as vehicle light trails, when shooting night scenes.
- An object of the present disclosure is to improve captured image processing so that high-quality images with slow shutter photography effects can be obtained.
- an electronic device for processing a series of images obtained by shooting a scene containing static objects and dynamic objects,
- the electronic device comprising a processing circuit configured to: generate, based on a particular image that contains a static object and is selected from the series of images, a processed image that does not include image content related to a dynamic object; generate a motion trajectory of the dynamic object in at least a portion of the images in the series; and obtain a final image based on both the processed image and the generated trajectory of the dynamic object.
- a method for processing a series of images obtained by shooting a scene containing static objects and dynamic objects, comprising: generating, based on a particular image that contains a static object and is selected from the series of images, a processed image that does not contain image content related to a dynamic object; generating a trajectory of the dynamic object in at least a portion of the images in the series; and obtaining a final image based on both the processed image and the generated trajectory of the dynamic object.
- an apparatus for processing a sequence of images, comprising at least one processor and at least one storage device storing instructions which, when executed by the at least one processor, cause the at least one processor to perform the methods described herein.
- a storage medium storing instructions which, when executed by a processor, cause the method as described herein to be performed.
- a computer program product comprising instructions which, when executed by a processor, cause the processor to perform the method as described herein.
- a computer program comprising instructions which, when executed by a computer, cause the computer to perform the method as described herein.
- FIG. 1 shows a block diagram of an electronic device for image processing according to an embodiment of the present disclosure.
- FIGS. 2A to 2D show schematic diagrams of background content replacement according to an embodiment of the present disclosure.
- Fig. 3 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- FIG. 4A shows an exemplary process of generating an image with a slow shutter photography effect according to an embodiment of the present disclosure
- FIG. 4B shows an exemplary image with a slow shutter photography effect.
- FIG. 5 illustrates a photography device according to an embodiment of the present disclosure.
- FIG. 6 shows a block diagram showing an exemplary hardware configuration of a computer system capable of implementing embodiments of the present disclosure.
- the slow shutter mode is often used to achieve special dynamic effects by prolonging the shutter time.
- because the exposure time is long, even slight shaking during the shooting process will cause problems such as blurring of the static objects in the final image.
- this increases the burden on the photographer and still cannot effectively suppress the negative impact caused by shaking.
- the present disclosure proposes a technology for processing a group of images, in particular a piece of video data or a group of images captured over a certain time (for example, equal to the shutter setting time), in which the group of images is analyzed and processed with respect to static objects and dynamic objects, and a photo with an effect similar to slow shutter photography is generated based on a specific static-object image and the dynamic-object trajectories.
- the solution of the present disclosure is an improved image processing that simulates the effect of slow shutter photography by acquiring a piece of video data or a group of images and processing them; it is independent of the camera shooting mode, such as the camera shutter mode setting, and in particular independent of the slow shutter shooting mode.
- in the slow shutter shooting mode, the final image is generated by optical device imaging once the shutter time is reached; that is, only one final image is obtained in the slow shutter shooting mode.
- a segment of video data or a group of images is processed.
- a piece of video data or a group of images can be obtained by using a camera for continuous shooting; the shooting mode, including shutter settings, can use conventional settings, and only the length of the shooting time needs to be set appropriately.
- the shooting time length may, for example, be equal to a commonly used slow shutter duration or another suitable duration.
- this disclosure is not simply anti-shake processing of video; rather, by selecting, optimizing and combining the captured video data/group of images, high-quality photos with slow shutter photography effects can still be obtained even if there is shaking or jitter in the captured video/photos.
- Fig. 1 shows a block diagram of an apparatus for image processing of a series of images according to an embodiment of the present disclosure.
- the device 10 includes a processing circuit 102 configured to: generate, based on a particular image that contains a static object and is selected from the series of images, a processed image that does not contain image content related to a dynamic object; generate a motion trajectory of the dynamic object in at least a portion of the series of images; and obtain a final image based on both the processed image and the generated motion trajectory of the dynamic object.
- a series of images are captured for a scene containing at least one of static objects and dynamic objects.
- the series of images is captured by the camera over a certain period of time, for example continuously or at specific time intervals.
- the series of images is obtained by using a photography device to shoot for a specific period of time under non-fixed (e.g. handheld) conditions.
- the shooting time period, that is, the shooting duration, can be set appropriately, for example corresponding to a shutter speed setting.
- when a user is shooting with a handheld camera device, a piece of video data is obtained; this video data essentially comprises multi-frame data, that is, multiple frame images, which can be used as the series of images to be processed.
- the number of images in the series to be processed may vary.
- the series of images to be processed may be all the images taken in a specific time period, or a part of them, such as randomly selected images or images selected at equal intervals.
- the static objects and dynamic objects in the video data are processed separately, so various appropriate ways can be used to identify and distinguish the static objects and dynamic objects in the image collection.
- all images can be analyzed, and the static objects and dynamic objects in the entire video can be distinguished by comparing the pixel information at each position in the images while allowing a certain offset.
- the processing circuit is configured to identify objects present in each image of the series of images with a position shift less than a certain threshold as static objects.
- a static object should be an object that remains substantially fixed throughout the shooting process, thus appearing as an object that exists in all frame data of the video and whose position is offset within a certain error.
- this specific threshold/error can be set empirically, or obtained from data analysis of a training image set or previously captured images.
- the processing circuit is configured to identify, as a dynamic object, an object that does not appear in all images of the series, or that appears in all images of the series but whose position shifts by more than a certain threshold.
- dynamic objects can appear as moving objects in the video data, i.e. objects whose position deviation is greater than a certain threshold, or objects that enter or leave the capture range during shooting, etc.
- the threshold here may be a previous specific threshold, or an additional threshold.
- the identification of static objects and dynamic objects can be performed frame by frame, or at certain intervals, over all of the video data.
- a certain frame of the video data, such as the first frame or the last frame, may be selected as a reference, and the other frames compared against it.
- the identification of static objects and dynamic objects may be performed by a processing circuit, or by a device other than the processing circuit included in the electronic device, or even by a device other than the electronic device that can acquire video data.
- the apparatus may recognize the image and provide the recognition result to a processing circuit of the electronic device.
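The identification rule above (an object present in every frame whose position shift stays below a threshold is static; anything else is dynamic) can be sketched as follows. This is a minimal illustration under assumed inputs, not the patent's implementation: `classify_objects`, the track-dictionary format, and the threshold value are all hypothetical, and per-frame object detections are assumed to already be available from a detector/tracker.

```python
def classify_objects(tracks, num_frames, max_shift=5.0):
    """Split detected objects into static and dynamic sets.

    tracks: dict mapping object id -> {frame_index: (x, y)} detections.
    An object is static only if it appears in every frame and its
    total displacement range stays below max_shift (in pixels).
    """
    static, dynamic = [], []
    for obj_id, det in tracks.items():
        if len(det) < num_frames:          # missing from some frames -> dynamic
            dynamic.append(obj_id)
            continue
        xs = [p[0] for p in det.values()]
        ys = [p[1] for p in det.values()]
        shift = max(max(xs) - min(xs), max(ys) - min(ys))
        (static if shift < max_shift else dynamic).append(obj_id)
    return static, dynamic
```

In a real pipeline the tracks would come from running detection over the decoded video frames, and `max_shift` would be set empirically or learned from training data, as the text notes.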
- the processing of the static object is performed based on a specific image containing the static object selected from the series of images.
- the specific image is an image in which the static object in the series of images meets specific requirements.
- the specific requirements are related to the characteristics and needs of the captured images; satisfying them may mean, for example, the best photographic effect, the sharpest image, the best facial expressions, the best colors, or even an image that is not perfectly sharp but in which the subjects' expressions and poses are the most satisfactory, and so on.
- the specific image is an image selected from a series of images that represents a static object better.
- a frame in which the static object is well represented is selected from the frames of the video as the static-object frame, i.e. the aforementioned specific image.
- good representation of the static object means an image that satisfies specific requirements in terms of the static object's sharpness, shape, color, and so on.
- the selected specific image is one of the best such images.
- selection from a series of images may be based on a static object template/model.
- the processing circuit is further configured to compare each image in the series of images with a static object template, and select the image in the series of images that has the highest proximity to the static object template as that particular image.
- static object templates are templates obtained for various requirements, such as high definition, good character expression, good object color, good shape, and so on.
- the closeness is the closeness between the static object in the image and the static object template with respect to at least one of clarity, character expression, shape, and color.
- the proximity may also be the proximity of other types of information, especially information related to the aforementioned specific requirements.
- the static object template/model is obtained through training, in particular, the static object template is a static object data model trained based on a training image set. In some embodiments, the static object template/model is obtained based on a pre-provided training data set. In some embodiments, at least a portion of the images obtained from each capture may be added to the training dataset. In other embodiments, the static object template/model can be updated dynamically, for example, can be retrained when the training data set changes. Changes to the training data set include, but are not limited to, for example, periodically replacing the training data set with new training data sets, periodically updating the training data set by adding additional training data, and so on.
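As a rough stand-in for the template-closeness selection described above, the sketch below scores each frame with a simple gradient-energy sharpness measure and picks the highest-scoring one as the "specific image". Note the assumptions: the patent compares frames against a trained static-object template, whereas this proxy only measures sharpness; the function names and the 2-D grayscale-list frame format are illustrative.

```python
def sharpness(gray):
    """Sum of squared horizontal/vertical pixel differences: a crude focus measure."""
    h, w = len(gray), len(gray[0])
    score = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                score += (gray[y][x + 1] - gray[y][x]) ** 2
            if y + 1 < h:
                score += (gray[y + 1][x] - gray[y][x]) ** 2
    return score

def select_reference_frame(frames):
    """Return the index of the frame scoring highest (the 'specific image')."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

A trained static-object model would replace `sharpness` with a closeness score covering expression, shape and color as well, per the embodiments above.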
- the processing of the static object includes processing a specific image to remove the image content of the dynamic object, so as to obtain image/frame data containing only the static object as a reference image for obtaining the final image.
- the processing circuit may be further configured to: replace content related to the dynamic object in the specific image with background content.
- dynamic objects can be removed from the selected specific static-object image/frame; for example, the pixel region of a dynamic object can be removed, this region can be filled with background pixel information taken from corresponding locations in other image/frame data, and the completed frame data is used as the reference image for final imaging.
- the processing circuit may be further configured to: screen, from the series of images, an image in which the dynamic object's position does not coincide with the dynamic object's position in the specific image and in which the static object's position deviates least from the static object's position in the specific image; and determine the background content based on the content at the position in the screened image corresponding to the position of the dynamic object in the specific image.
- if the position of the static object in the screened image is consistent with its position in the specific image, the content in the screened image at the position corresponding to the dynamic object's position in the specific image, and of the same size, can be used as the background content.
- if the position of the static object in the screened image deviates from its position in the specific image, the content at the corresponding position in the screened image, adjusted by the offset value, can be used as the background content.
- the dynamic object and some static objects are identified in the image data, as shown in Figure 2A. Assume the shooting time is 2 seconds at 30 frames per second, and that the 15th frame is selected from the video data as the specific image/frame, in which two static objects, Still 1 and Still 2, and one dynamic object are identified.
- the numbers of static objects and dynamic objects in the figures are exemplary and they may have other numbers.
- a frame is screened as reference data for the later processing, as shown in FIG. 2C; for example, the 40th frame is selected.
- the replacement based on the screened 40th frame can be divided into two cases.
- if the size and position of the static object in the 40th frame are the same as in the 15th frame, a picture of the same size is captured directly from the position in the 40th frame corresponding to the dynamic object's position in the 15th frame (for example, the same position) and added to the corresponding position of the 15th frame.
- if the size and position of the static object in the 40th frame deviate slightly from those in the 15th frame, the positional/size deviation of the static object between the two sets of data can be calculated first, and the captured content adjusted accordingly.
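The two replacement cases above (matching vs. offset static-object positions, as in Figs. 2A-2D) might be sketched as follows, with frames as 2-D lists of pixel values. `fill_background`, the bounding-box format and the precomputed offset are hypothetical names for illustration; the offset is the measured static-object deviation between the screened (donor) frame and the specific frame, and is `(0, 0)` in the first case.

```python
def fill_background(specific, donor, bbox, offset=(0, 0)):
    """Overwrite the dynamic-object region of the specific frame with
    background pixels taken from a donor frame.

    bbox:   (x, y, w, h) of the dynamic object in the specific frame.
    offset: (dx, dy) shift of the donor frame's static content relative
            to the specific frame; the donor patch is read at the
            shifted location. Assumes the patch stays inside the frame.
    """
    x, y, w, h = bbox
    dx, dy = offset
    out = [row[:] for row in specific]   # work on a copy
    for j in range(h):
        for i in range(w):
            out[y + j][x + i] = donor[y + j + dy][x + i + dx]
    return out
```

A production implementation would also blend patch borders and handle patches that fall partly outside the donor frame.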
- the processing of the dynamic object includes determining the motion condition of the dynamic object from a series of images, and in some examples, generating a motion trajectory of the dynamic object.
- a dynamic object may be an object moving in the video, or an object entering or leaving partway through the video.
- the processing circuit is further configured to: track the movement of the dynamic object in the series of images from the image in which the dynamic object first appears to the image in which it appears last to generate a motion trajectory.
- the entire moving process of a dynamic object in a video can be tracked to generate a trajectory.
- it can be obtained by connecting the positions in each image where the dynamic object appears.
- the images used to connect positions into a trajectory may be required to number at least a set threshold; objects that appear in only a few images need not generate trajectories, which reduces possible disturbances in the generated image.
- the images used to generate the trajectory may be continuous images, such as images captured sequentially or consecutive in capture time. In some embodiments there may be discontinuities between parts of these images, for example a separation of a few other images, such as when a moving object is briefly occluded by another passing object.
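The connect-the-positions approach, including the minimum-appearance filter just described, can be sketched as follows. This is a simplified illustration with hypothetical names: detections are assumed to be the dynamic object's per-frame centroids, and each consecutive pair is linearly interpolated to produce a continuous trajectory.

```python
def build_trajectory(detections, min_frames=3, samples=10):
    """Connect a dynamic object's per-frame positions into a trajectory.

    detections: list of (x, y) centroids, ordered by capture time.
    Returns interpolated points, or [] when the object appears in too
    few images to warrant a trajectory (reducing clutter in the result).
    """
    if len(detections) < min_frames:
        return []
    points = []
    for (x0, y0), (x1, y1) in zip(detections, detections[1:]):
        for k in range(samples):
            t = k / samples
            points.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t))
    points.append(detections[-1])
    return points
```

Gaps caused by brief occlusion could be bridged the same way, since interpolation between the positions on either side of the gap needs no extra machinery.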
- the motion trace of the dynamic object may include the motion light trace of the motion object.
- the processing circuit is further configured to: acquire optical information of the dynamic object in the series of images, from the image in which the dynamic object first appears to the image in which it last appears, and generate the motion light trace of the dynamic object based on the dynamic object's motion track and the optical information.
- the optical information includes at least one of brightness information and color information of the dynamic object in each image from the image in which the dynamic object first appears to the image in which the dynamic object appears last in the series of images. It should be noted that the optical information may also include other appropriate information, as long as the information can be used to generate the optical trajectory. In the present disclosure, various suitable ways can be used to generate the optical trajectory. In some embodiments, the optical trajectory may be generated by means of various suitable algorithms. For example, analyze the brightness information of each part of the dynamic object, and combine the principle of light and shadow graffiti to convert the trajectory of the dynamic object into a light trail.
- the processing circuitry is further configured to: analyze the optical information at locations on the dynamic object in each image, from the image in which the dynamic object first appears to the image in which it last appears, to obtain the highlighted parts of the dynamic object; and connect the highlighted parts at the same positions on the dynamic object, from the first image in which it appears to the last, to generate the dynamic object's moving light trace.
- the highlighted part on the dynamic object is determined based on the comparison between the brightness information of the dynamic object and the brightness information of the background part of the image.
- whether an object is visible in the final image depends on its exposure ratio (exposure time combined with the object's brightness information).
- for a dynamic object, if its own brightness differs little from the background brightness of the road it passes over, then because the object stays at each position only briefly, the road's share of the exposure time is much higher than the object's, and the object will appear very faint in the final image or not at all.
- if the object's own brightness (such as a car's headlights) is much higher than the brightness of the road background, this compensates for the short exposure time, and the highlighted parts will appear in the final photo. This is also why, in slow shutter photography, a car leaves only the trail of its headlights while almost no other part of it is visible.
- the motion trajectory is first obtained by tracking the dynamic object; each frame is then analyzed to obtain the per-unit-time brightness value of the road background along the trajectory (at 30 frames per second, one frame corresponds to 1/30 second), which is combined with the exposure time to obtain the road background's exposure ratio during shooting and thus estimate its total brightness value.
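The exposure-ratio reasoning above can be made concrete with a small calculation. This is an illustrative model, not the patent's formula: accumulated light at one trajectory pixel is taken as brightness multiplied by exposed time, with hypothetical luminance numbers.

```python
def accumulated_brightness(background_lum, object_lum, total_frames,
                           frames_at_pixel, frame_time=1 / 30):
    """Estimate total light accumulated at one trajectory pixel.

    The background exposes for every frame except those in which the
    moving object covers the pixel; the object exposes only while it is
    there. Returns (total accumulated light, the object's share of it).
    A visible trail needs the object's share to be significant, which
    only happens when object_lum far exceeds background_lum.
    """
    bg = background_lum * (total_frames - frames_at_pixel) * frame_time
    obj = object_lum * frames_at_pixel * frame_time
    return bg + obj, obj / (bg + obj)
```

With a 2-second, 60-frame capture where the object covers a pixel for one frame, an object barely brighter than the road contributes about 2% of the pixel's light, while a headlight-bright object dominates it: exactly the effect described above.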
- further processing may be performed on the motion track/moving light track of the dynamic object generated above.
- optimization can be performed according to the user's shooting needs, including removing noise points so that the generated motion trajectory/light trail is more continuous and smooth, for example where small displacements occur, such as a car briefly changing lanes.
- the trajectory of the dynamic object can also be screened.
- the motion trajectory of the dynamic object used to generate the final image refers to the motion trajectory of a dynamic object appearing in more than a predetermined number of images in the series.
- the predetermined number can be set appropriately, for example empirically.
- the trajectories of dynamic objects that appear only briefly in the captured video can be deleted, so that cluttered motion trajectories are effectively avoided and the trajectories of the main, long-present objects are highlighted.
- the final image may be generated based on the processed image generated from the static object specific image and the obtained dynamic object motion trajectory.
- the motion trajectory of the dynamic object is synthesized into the processed image to obtain the final image.
- Combining operations can be performed in various suitable ways.
- the motion trajectory is directly superimposed on the corresponding position in the specific image of the static object.
- the position of the generated trajectory may be determined according to its position information, for example based on the position information of the moving object in each image, and the generated trajectory may then be mapped to the corresponding position in the specific image.
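The superimposition step might look like the following sketch, again with frames as 2-D lists of luminance values and hypothetical names. Trajectory points are rounded to pixel coordinates and blended by taking the maximum, so the trail brightens the scene without darkening the static content, consistent with how bright trails accumulate in a long exposure.

```python
def composite(processed, trail_points, trail_lum=255):
    """Superimpose a light trail onto the processed (static-only) image.

    processed:    2-D list of pixel luminances (the reference image).
    trail_points: iterable of (x, y) trajectory coordinates.
    Points falling outside the frame are skipped.
    """
    out = [row[:] for row in processed]
    h, w = len(out), len(out[0])
    for x, y in trail_points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            out[yi][xi] = max(out[yi][xi], trail_lum)
    return out
```

Per-point luminance and color taken from the acquired optical information could replace the constant `trail_lum` to reproduce the light trace rather than a uniform line.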
- the processing circuit 102 may be in the form of a general processor, or may be a special processing circuit, such as an ASIC.
- the processing circuit 102 can be configured by an electric circuit (hardware) or a central processing device such as a central processing unit (CPU).
- a program (software) for operating the circuit (hardware) or central processing device may be carried on the processing circuit 102 .
- the program can be stored in a memory, such as one arranged in the device or an external storage medium connected from outside, and downloaded via a network such as the Internet.
- the processing circuit 102 may include various units for realizing the above functions, such as a processed image obtaining unit 104, used for generating, based on a specific image containing a static object selected from the series of images, a processed image that does not contain image content related to the dynamic object; a motion trajectory generation unit 106, used for generating a motion trajectory of the dynamic object in at least a part of the images in the series; and a synthesis unit 108, used for obtaining the final image based on both the processed image and the generated motion trajectory of the dynamic object.
- Each unit can operate as described above and will not be described in detail here.
- the processing circuit 102 may further include an object recognition unit 110 for recognizing at least one of a static object and a dynamic object in an image, and may operate as described above, which will not be described in detail here.
- the processing circuit 102 may further include a selection unit 112, which is used to select a specific image containing a static object from the series of images, which may be operated as described above and will not be described in detail here.
- the motion trajectory generation unit 106 may include an optical information acquisition unit 1062, which is used to acquire the optical information of the dynamic object in the series of images from the image where the dynamic object first appears to the image where the dynamic object appears last, and A light trace generation unit 1064, configured to generate a motion light trace of the dynamic object based on the motion track of the dynamic object and the optical information.
- the object recognition unit 110, the selection unit 112, the optical information acquisition unit 1062 and the light trace generation unit 1064 are drawn with dotted lines to illustrate that these units are not necessarily included in the processing circuit, or may not exist at all. As an example, such a unit may be located in the terminal-side electronic device but outside the processing circuit, or even outside the electronic device 10. It should be noted that although the units are shown as separate units in FIG. 1, one or more of them may be combined into one unit or split into multiple units.
- each of the above units may be implemented as an independent physical entity, or several of them may be implemented together by a single entity (for example, a processor such as a CPU or DSP, or an integrated circuit).
- the above-mentioned units are shown with dotted lines in the drawings to indicate that they may not actually exist; the operations/functions they realize may instead be realized by the processing circuit itself.
- FIG. 1 is only a schematic structural configuration of an electronic device for image processing, and the electronic device 10 may also include other possible components, such as a memory, a network interface, a controller, etc., and these components are not shown for clarity.
- a processing circuit may be associated with a memory.
- the processing circuit may be directly or indirectly (eg, other components may be connected therebetween) connected to the memory for accessing data related to image processing.
- the memory may store various data and/or information generated by the processing circuitry 102 .
- the memory may also be located within the terminal-side electronics but outside the processing circuitry, or even outside the terminal-side electronics.
- the memory can be volatile memory and/or non-volatile memory.
- memory may include, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read only memory (ROM), flash memory.
- the image processing is aimed at processing a series of images obtained by shooting a scene containing static objects and dynamic objects. As shown in FIG. 3, in step S301 of the image processing method, a processed image that does not contain image content related to a dynamic object is generated based on a specific image, selected from the series of images, that contains a static object; in step S302, a motion trajectory of the dynamic object in at least a part of the images in the series is generated; and in step S303, a final image is obtained based on both the processed image and the generated motion trajectory of the dynamic object.
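The three steps S301-S303 can be strung together as a toy pipeline. The compositing rule (trail pixels overwrite the static image) and the brightness threshold are assumptions made for illustration only; the disclosure leaves the concrete operators open.

```python
import numpy as np

def process_series(frames, reference, dyn_thresh=128):
    """Toy S301-S303 pipeline on grayscale NumPy frames."""
    # S301: start from the selected reference image containing the static
    # object; here the dynamic content is assumed already removed from it.
    processed = reference.copy()
    # S302: accumulate the dynamic object's bright pixels across frames
    # into a motion trail (per-pixel maximum as a long-exposure stand-in).
    trail = np.zeros_like(reference)
    for f in frames:
        trail = np.maximum(trail, np.where(f > dyn_thresh, f, 0))
    # S303: composite the trail over the processed static image.
    return np.where(trail > 0, trail, processed)
```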
- FIG. 4B shows the finally obtained image with slow shutter photography effect.
- the box in the lower left corner of the figure marks a stationary car, a static object that is displayed clearly; the box to its right marks the motion light trail of a moving vehicle, a dynamic object, in the middle road area.
- the reference image used to synthesize the final photo is selected automatically from the captured video data, so even if there is jitter during the shooting process, or in part of the video data, an appropriate image can still be selected conveniently and accurately, which helps improve the final image quality.
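One plausible way to select such a reference frame automatically can be sketched as below. The gradient-magnitude "clarity" score here is a hypothetical stand-in for the proximity-to-template comparison described elsewhere in the disclosure, used only to show the selection loop.

```python
import numpy as np

def sharpness(img):
    """Mean gradient magnitude as a simple clarity score."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def select_reference(frames):
    """Pick the index of the frame with the highest clarity score."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```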
- the image to be processed may be any suitable image, such as a raw image obtained by a photographic device, or an image derived from the raw image through specific processing such as preliminary filtering, anti-aliasing, color adjustment, contrast adjustment, normalization, and so on.
- the preprocessing operation may also include other types of preprocessing operations known in the art, which will not be described in detail here.
- the solutions of the present disclosure can be used in combination with various image processing techniques in existing photographic equipment, in particular with white balance, exposure compensation, anti-shake processing, ghost compensation, and so on.
- the aforementioned various processes may be performed after the final image with slow shutter photography effect is obtained through the solution of the present disclosure.
- the image captured by the user's handheld device may be subjected to the foregoing various processes, and then the process of the present disclosure may be applied to obtain a further optimized image with a slow shutter photography effect.
- the foregoing various processes may be executed synchronously during shooting.
- the technical concept of the present disclosure can preferably be applied to existing photographic equipment through hardware (such as chips or electronic components), firmware, or software. In this way, even when the photographic equipment is not fixed, such as when it is handheld or otherwise held in a non-fixed manner, wonderful photos with slow shutter photography effects can still be taken. This reduces the user's burden in photography, simplifies the photography operation, and removes the need for a tripod to fix the photographic equipment.
- the image processing electronic device of the present disclosure can be integrated in the photographic device, for example in the form of an integrated circuit or a processor, or even integrated into the photographic device's existing processing circuit. Alternatively, it can be a separate device detachably connected to the photographic equipment, for example as a separate module, or fixed together with a camera lens that can be detachably attached to the photographic equipment, so that it can still be used even if the lens is moved to other equipment.
- the solution of the present disclosure processes the captured image to obtain an image with a slow shutter photography effect. In some embodiments, it can even be set on a remote device that can communicate with the photographic device.
- the photographic device can transmit the obtained image to the remote device after shooting; after the remote device performs the image processing, it sends the processed image back to the photographic device for display, or passes it to other devices for display.
- the remote device may be a device that can be connected with the photographic device to control photographing and/or display, such as a smart phone or a portable electronic device such as a tablet computer, or may be located in such a device.
- the solution of the present disclosure can be realized by a software algorithm, so that it can be easily integrated in various types of photographic equipment, such as video cameras, cameras (SLR cameras, mirrorless cameras, etc.), and mobile phone cameras.
- the method of the present disclosure may be executed by a processor of a photography device as a computer program, an instruction, etc., so as to perform image processing on a captured image.
- the photographic equipment to which the technical solutions of the present disclosure can be applied may include various types of optical photographic equipment, such as lenses installed in portable equipment, photographing devices on drones, photographing devices in monitoring equipment, and so on.
- the photographing device is not limited thereto, and can also be any other suitable device, as long as the photographing device can shoot images continuously within a specific time period to obtain corresponding videos/photos.
- the present disclosure can be used in many applications.
- the invention can be used to monitor, identify, and track objects in still images or moving video captured by cameras, and is especially advantageous for camera-equipped portable devices, (camera-based) mobile phones, and the like.
- a photography device is also provided, which includes an image acquisition device for acquiring a series of images obtained by shooting a scene containing at least one of a static object and a dynamic object, and the aforementioned apparatus for image processing, to perform image processing on the acquired series of images.
- the image acquisition device may be any appropriate device designed to obtain such a series of images and may be implemented in various appropriate ways; for example, it may include a camera or a camera device that obtains images by shooting the scene, or it may obtain images from other photographic components of the photographic equipment, or even from equipment other than the photographic equipment.
- FIG. 5 shows a block diagram of a photographing device according to an embodiment of the present disclosure.
- the photography device 50 includes an image processing device 502, which can be used to process the captured image so as to obtain a photo image with a slow shutter photography effect.
- the photographic device 50 may include a lens unit 504, which may include various optical lenses known in the art, for imaging an object on a sensor through optical imaging.
- the photography device may also include an output device for outputting the photo image obtained by the image processing device and having the effect of slow shutter photography.
- the output device may be in various appropriate forms, such as a display device, or may be a communication device for outputting the photo image to other devices, such as a server, cloud and so on.
- the photographic device 50 may include a photographic filter 506, which may be any of various photographic filters known in the art and may be mounted to the front of the lens.
- the photography device 50 may also include a processing circuit 508, which may be used to process the obtained images, such as performing various pre-processing before the image processing, or various post-processing after obtaining images with slow shutter photography effects, such as noise reduction, further beautification, and so on.
- the processing circuit 508 may be in the form of a general processor, or may be a special processing circuit, such as an ASIC.
- the processing circuit 508 can be constructed by an electrical circuit (hardware) or a central processing device such as a central processing unit (CPU).
- the processing circuit 508 may carry a program (software) for operating a circuit (hardware) or a central processing device.
- the program can be stored in a memory arranged in the device or in an external storage medium connected from the outside, and can be downloaded via a network such as the Internet.
- At least one of the lens unit 504, photographic filter 506 and processing circuit 508 may be included in an image acquisition device. It should be noted that although not shown, the image acquisition device may also include other components as long as the image to be processed can be obtained.
- the photographic filter and the processing circuitry are drawn with dotted lines to illustrate that these units are not necessarily included in the photographic device 50, and may even be external to the photographic device 50 and connected to it and/or communicating with it in a known manner. It should be noted that although each unit is shown as a separate unit in FIG. 5, one or more of these units may also be combined into one unit, or split into multiple units.
- FIG. 6 is a block diagram showing an example structure of a personal computer of an information processing device employable in an embodiment of the present disclosure.
- the personal computer may correspond to the above-mentioned exemplary transmitting device or terminal-side electronic device according to the present disclosure.
- a central processing unit (CPU) 601 executes various processes according to programs stored in a read only memory (ROM) 602 or loaded from a storage section 608 to a random access memory (RAM) 603 .
- ROM read only memory
- RAM random access memory
- data required when the CPU 601 executes various processing and the like is also stored as necessary.
- the CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604.
- the input/output interface 605 is also connected to the bus 604 .
- the following components are connected to the input/output interface 605: an input section 606 including a keyboard, a mouse, etc.; an output section 607 including a display such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker; a storage section 608 including a hard disk, etc.; and a communication section 609 including a network interface card such as a LAN card, a modem, and the like.
- the communication section 609 performs communication processing via a network such as the Internet.
- a drive 610 is also connected to the input/output interface 605 as needed.
- a removable medium 611 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 610 as necessary, so that a computer program read therefrom is installed into the storage section 608 as necessary.
- the programs constituting the software are installed from a network such as the Internet or a storage medium such as the removable medium 611 .
- the storage medium is not limited to the removable medium 611 shown in FIG. 6, in which the program is stored and which is distributed separately from the device to provide the program to the user.
- examples of the removable medium 611 include magnetic disks (including floppy disks (registered trademark)), optical disks (including compact disc read-only memory (CD-ROM) and digital versatile discs (DVD)), magneto-optical disks, and semiconductor memory.
- the storage medium may be a ROM 602, a hard disk contained in the storage section 608, etc., in which the programs are stored and distributed to users together with devices containing them.
- the methods and systems of the present invention can be implemented in a variety of ways.
- the methods and systems of the present invention may be implemented by software, hardware, firmware, or any combination thereof.
- the sequence of steps of the method described above is illustrative only, and unless specifically stated otherwise, the steps of the method of the present invention are not limited to the sequence specifically described above.
- the present invention can also be embodied as a program recorded in a recording medium, including machine-readable instructions for implementing the method according to the present invention. Therefore, the present invention also covers a recording medium storing a program for implementing the method according to the present invention.
- Such storage media may include, but are not limited to, floppy disks, optical disks, magneto-optical disks, memory cards, memory sticks, and the like.
- embodiments of the present disclosure may also include the following illustrative examples (EE).
- EE 1 An electronic device for processing a series of images obtained by shooting a scene containing at least one of a static object and a dynamic object, comprising processing circuitry configured to: generate, based on a specific image containing a static object selected from the series of images, a processed image that does not contain image content related to a dynamic object; generate a motion trajectory of the dynamic object in at least a part of the images in the series of images; and obtain a final image based on both the processed image and the generated motion trajectory of the dynamic object.
- EE 2 The electronic device according to EE 1, wherein the series of images are obtained by using a photography device to shoot for a specific period of time when the photography device is not fixed.
- EE 3 The electronic device according to EE 1, wherein the series of images are obtained by using a photographic device to shoot for a specific period of time.
- EE 4 The electronic device according to EE 1, wherein the specific image is an image in the series of images in which the static object meets specific requirements.
- EE 5 The electronic device according to EE 4, wherein the processing circuit is further configured to: compare each image in the series of images with a static object template, and select the image in the series with the highest proximity to the static object template as the image in which the static object meets the specific requirements.
- EE 6 The electronic device according to EE 5, wherein the proximity is the proximity, in terms of at least one of clarity, facial expression, form, and color, between the static object in the image and the static object template.
- EE 7 The electronic device according to EE 5, wherein the static object template is a static object data model trained based on a training image set.
- EE 8 The electronic device according to EE 1, wherein the processing circuit is further configured to: track the movement of the dynamic object in the series of images, from the image in which the dynamic object first appears to the image in which it last appears, to generate the motion trajectory.
- EE 9 The electronic device according to any one of EE 1-8, wherein the processing circuit is further configured to: acquire optical information of the dynamic object in the series of images, from the image in which the dynamic object first appears to the image in which it last appears, and generate a motion light trace of the dynamic object based on the motion trajectory of the dynamic object and the optical information.
- EE 10 The electronic device according to EE 9, wherein the optical information comprises at least one of brightness information and color information of the dynamic object in each image in the series, from the image in which the dynamic object first appears to the image in which it last appears.
- EE 11 The electronic device according to EE 1, wherein the processing circuit is further configured to: for each image in the series from the image in which the dynamic object first appears to the image in which it last appears, analyze the optical information at each position on the dynamic object to obtain highlighted parts on the dynamic object; and connect the highlighted parts at the same position on the dynamic object across those images to generate the motion light trace of the dynamic object.
- EE 12 The electronic device according to EE 11, wherein the highlighted part on the dynamic object is determined based on the comparison between the brightness information of the dynamic object and the brightness information of the background part of the image.
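The brightness comparison of EE 12 and the connection of highlighted parts across frames can be sketched as below. The fixed brightness margin and the simple union of per-frame masks are illustrative assumptions, not the claimed procedure.

```python
import numpy as np

def highlight_mask(frame, background, margin=30):
    """Pixels whose brightness exceeds the background brightness by more
    than `margin` are treated as highlighted parts of the dynamic object."""
    return frame.astype(float) - background.astype(float) > margin

def connect_highlights(frames, background, margin=30):
    """Union of the per-frame highlight masks: connecting the same
    highlighted part across frames traces out the motion light trail."""
    trail = np.zeros(frames[0].shape, dtype=bool)
    for f in frames:
        trail |= highlight_mask(f, background, margin)
    return trail
```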
- EE 13 The electronic device according to EE 1, wherein the motion trajectory of the dynamic object used to generate the final image comprises the motion trajectory of a dynamic object appearing in more than a predetermined number of images in the series of images.
- EE 14 The electronic device according to any one of EE 1-13, wherein the processing circuit is further configured to: replace the image content related to the dynamic object in the specific image with background content to remove the dynamic-object-related image content from the specific image; and synthesize the motion trajectory of the dynamic object with the replaced specific image to generate the final image.
- EE 15 The electronic device according to EE 14, wherein the processing circuit is further configured to: screen out, from the series of images, an image whose dynamic object position does not overlap the dynamic object position in the specific image and whose static object position deviates least from that in the specific image; and determine the background content based on the content in the screened image at the position corresponding to the dynamic object position in the specific image.
- EE 16 The electronic device according to EE 15, wherein, in the case where the position of the static object in the screened image is consistent with the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, and of the same size, is used as the background content.
- EE 17 The electronic device according to EE 15, wherein, in the case where the position of the static object in the screened image deviates from the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, adjusted based on the deviation value, is used as the background content.
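The background replacement of EE 16 and EE 17 can be sketched as a patch copy with an optional offset correction. The rectangular `box` representation and the integer `offset` are illustrative assumptions; the disclosure does not prescribe a region shape.

```python
import numpy as np

def patch_background(specific, screened, box, offset=(0, 0)):
    """Replace the dynamic-object region `box` (y, x, h, w) in the specific
    image with the content at the corresponding, offset-adjusted location
    in the screened image."""
    y, x, h, w = box
    dy, dx = offset  # static-object offset between the two frames (EE 17)
    out = specific.copy()
    out[y:y + h, x:x + w] = screened[y + dy:y + dy + h, x + dx:x + dx + w]
    return out
```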
- EE 18 The electronic device according to any one of EE 1-17, wherein the processing circuit is further configured to:
- Objects that are present in each image of the series of images and whose position shifts are less than a certain threshold are identified as static objects.
- EE 19 The electronic device according to any one of EE 1-18, wherein the processing circuit is further configured to:
- Objects that do not appear on all images in the series of images or that appear on all images in the series of images but with a position shift greater than a certain threshold are identified as dynamic objects.
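The static/dynamic classification rules of EE 18 and EE 19 can be sketched directly. The per-object position tracks and the displacement threshold are assumed inputs; a real system would obtain them from an object tracker.

```python
import numpy as np

def classify(tracks, n_frames, shift_thresh=2.0):
    """tracks: {object_id: list of per-frame (y, x) positions, or None
    where the object is absent}. Static: present in every frame with
    maximum displacement below the threshold (EE 18); everything else
    is dynamic (EE 19)."""
    static, dynamic = [], []
    for oid, positions in tracks.items():
        seen = [p for p in positions if p is not None]
        if len(seen) == n_frames:
            arr = np.asarray(seen)
            spread = np.linalg.norm(arr - arr[0], axis=1).max()
            if spread < shift_thresh:
                static.append(oid)
                continue
        dynamic.append(oid)
    return static, dynamic
```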
- EE 20 An image processing method for processing a series of images obtained by shooting a scene containing static objects and dynamic objects, comprising: generating, based on a specific image containing a static object selected from the series of images, a processed image that does not contain image content related to a dynamic object; generating a motion trajectory of the dynamic object in at least a part of the images in the series of images; and obtaining a final image based on both the processed image and the generated motion trajectory of the dynamic object.
- EE 21 The method according to EE 20, wherein the series of images are obtained by taking photographs for a specific period of time using a photographic device when the photographic device is not fixed.
- EE 22 The method according to EE 20, wherein the series of images are obtained by photographing for a specific period of time with a photographic device.
- EE 23 The method according to EE 20, wherein the specific image is an image in the series of images in which the static object meets specific requirements.
- EE 24 The method according to EE 23, further comprising: comparing each image in the series of images with a static object template, and selecting the image in the series with the highest proximity to the static object template as the image in which the static object meets the specific requirements.
- EE 25 The method according to EE 24, wherein the proximity is the proximity between the static object in the image and the static object template with respect to at least one of clarity, facial expression, form, and color.
- EE 26 The method according to EE 24, wherein the static object template is a static object data model trained based on a training image set.
- EE 27 The method according to any one of EE 20-26, wherein generating a motion trajectory of the dynamic object in at least a part of the images in the series of images comprises: tracking the movement of the dynamic object from the image in which it first appears to the image in which it last appears to generate the motion trajectory.
- EE 28 The method according to any one of EE 20-27, wherein generating a motion trajectory of the dynamic object in at least a part of the images in the series of images further comprises: acquiring optical information of the dynamic object in the series of images, from the image in which it first appears to the image in which it last appears, and generating a motion light trace of the dynamic object based on the motion trajectory of the dynamic object and the optical information.
- EE 29 The method according to EE 28, wherein the optical information comprises at least one of brightness information and color information of the dynamic object in each image in the series, from the image in which the dynamic object first appears to the image in which it last appears.
- EE 30 The method according to any one of EE 20-29, wherein generating a motion trajectory of the dynamic object in at least a part of the images further comprises: for each image in the series from the image in which the dynamic object first appears to the image in which it last appears, analyzing the optical information at each position on the dynamic object to obtain highlighted parts on the dynamic object; and connecting the highlighted parts at the same position on the dynamic object across those images to generate the motion light trace of the dynamic object.
- EE 31 The method according to EE 30, wherein the highlighted part on the dynamic object is determined based on a comparison between the brightness information of the dynamic object and the brightness information of the background part of the image.
- EE 32 The method according to EE 20, wherein the motion trajectory of the dynamic object used to generate the final image comprises the motion trajectory of a dynamic object appearing in more than a predetermined number of images in the series of images.
- EE 33 The method according to any one of EE 20-32, wherein generating a processed image that does not contain dynamic-object-related image content based on a specific image containing a static object selected from the series of images comprises: replacing the image content related to the dynamic object in the specific image with background content to remove the dynamic-object-related image content from the specific image, wherein the motion trajectory of the dynamic object is synthesized with the replaced specific image to generate the final image.
- EE 34 The method according to EE 33, wherein replacing the image content related to the dynamic object in the specific image with background content comprises: screening out, from the series of images, an image whose dynamic object position does not overlap the dynamic object position in the specific image and whose static object position deviates least from that in the specific image; and determining the background content based on the content in the screened image at the position corresponding to the dynamic object position in the specific image.
- EE 35 The method according to EE 34, wherein, in the case where the position of the static object in the screened image coincides with the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, and of the same size, is used as the background content.
- EE 36 The method according to EE 34, wherein, in the case where the position of the static object in the screened image deviates from the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, adjusted based on the deviation value, is used as the background content.
- EE 37 The method according to any one of EE 20-36, wherein the method further comprises:
- Objects that are present in each image of the series of images and whose position shifts are less than a certain threshold are identified as static objects.
- EE 38 The method according to any one of EE 20-37, wherein the method further comprises:
- Objects that do not appear on all images in the series of images or that appear on all images in the series of images but with a position shift greater than a certain threshold are identified as dynamic objects.
- EE 39 A photographic device comprising: an image acquisition device for acquiring a series of images obtained by shooting a scene containing at least one of a static object and a dynamic object; and the electronic device according to any one of EE 1-19, for performing image processing on the acquired series of images.
- EE 40 A device comprising: at least one processor; and at least one storage device storing instructions thereon which, when executed by the at least one processor, cause the at least one processor to perform the image processing method according to any one of EE 20-38.
- EE 41 A storage medium storing instructions which, when executed by a processor, enable execution of the image processing method according to any one of EE 20-38.
- EE 42 A computer program product comprising instructions which, when executed by a processor, enable execution of the image processing method according to any one of EE 20-38.
- EE 43 A computer program comprising instructions which, when executed by a computer, enable execution of the image processing method according to any one of EE 20-38.
Claims (42)
- An electronic device for processing a series of images, the series of images being obtained by shooting a scene containing at least one of a static object and a dynamic object, the electronic device comprising processing circuitry configured to: generate, based on a specific image containing a static object selected from the series of images, a processed image that does not contain image content related to a dynamic object; generate a motion trajectory of the dynamic object in at least a part of the images in the series of images; and obtain a final image based on both the processed image and the generated motion trajectory of the dynamic object.
- The electronic device according to claim 1, wherein the series of images is obtained by shooting with a photographic device for a specific period of time while the photographic device is not fixed.
- The electronic device according to claim 1, wherein the series of images is obtained by shooting with a photographic device for a specific period of time.
- The electronic device according to claim 1, wherein the specific image is an image in the series of images in which the static object meets specific requirements.
- The electronic device according to claim 4, wherein the electronic device is further configured to: compare each image in the series of images with a static object template, and select the image in the series with the highest proximity to the static object template as the image in which the static object meets the specific requirements.
- The electronic device according to claim 5, wherein the proximity is the proximity, in terms of at least one of clarity, facial expression, form, and color, between the static object in the image and the static object template.
- The electronic device according to claim 5, wherein the static object template is a static object data model trained based on a training image set.
- The electronic device according to claim 1, wherein the processing circuitry is further configured to: track the movement of the dynamic object in the series of images, from the image in which it first appears to the image in which it last appears, to generate the motion trajectory.
- The electronic device according to claim 1 or 8, wherein the processing circuitry is further configured to: acquire optical information of the dynamic object in the series of images, from the image in which it first appears to the image in which it last appears, and generate a motion light trace of the dynamic object based on the motion trajectory of the dynamic object and the optical information.
- The electronic device according to claim 9, wherein the optical information comprises at least one of brightness information and color information of the dynamic object in each image in the series, from the image in which the dynamic object first appears to the image in which it last appears.
- The electronic device according to any one of claims 1-10, wherein the processing circuitry is further configured to: for each image in the series from the image in which the dynamic object first appears to the image in which it last appears, analyze the optical information at each position on the dynamic object to obtain highlighted parts on the dynamic object; and connect the highlighted parts at the same position on the dynamic object across those images to generate the motion light trace of the dynamic object.
- The electronic device according to claim 11, wherein the highlighted parts on the dynamic object are determined by comparing the brightness information of the dynamic object with the brightness information of the background part of the image.
- The electronic device according to claim 1, wherein the motion trajectory of the dynamic object used to generate the final image comprises the motion trajectory of a dynamic object appearing in more than a predetermined number of images in the series of images.
- The electronic device according to any one of claims 1-13, wherein the processing circuitry is further configured to: replace the image content related to the dynamic object in the specific image with background content to remove the dynamic-object-related image content from the specific image; and synthesize the motion trajectory of the dynamic object with the replaced specific image to generate the final image.
- The electronic device according to claim 14, wherein the processing circuitry is further configured to: screen out, from the series of images, an image whose dynamic object position does not overlap the dynamic object position in the specific image and whose static object position deviates least from the static object position in the specific image; and determine the background content based on the content in the screened image at the position corresponding to the dynamic object position in the specific image.
- The electronic device according to claim 15, wherein, in the case where the position of the static object in the screened image is consistent with the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, and of the same size, is determined as the background content.
- The electronic device according to claim 15, wherein, in the case where the position of the static object in the screened image deviates from the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, adjusted based on the deviation value, is determined as the background content.
- The electronic device according to any one of claims 1-17, wherein the processing circuitry is further configured to: identify, as a static object, an object that is present in every image in the series of images and whose position shift is smaller than a specific threshold.
- The electronic device according to any one of claims 1-18, wherein the processing circuitry is further configured to: identify, as a dynamic object, an object that does not appear in all of the images in the series, or that appears in all of the images but whose position shift is larger than a specific threshold.
- An image processing method for processing a series of images, the series of images being obtained by shooting a scene containing static objects and dynamic objects, the image processing method comprising: generating, based on a specific image containing a static object selected from the series of images, a processed image that does not contain image content related to a dynamic object; generating a motion trajectory of the dynamic object in at least a part of the images in the series of images; and obtaining a final image based on both the processed image and the generated motion trajectory of the dynamic object.
- The method according to claim 20, wherein the series of images is obtained by shooting with a photographic device for a specific period of time while the photographic device is not fixed.
- The method according to claim 20, wherein the series of images is obtained by shooting with a photographic device for a specific period of time.
- The method according to claim 20, wherein the specific image is an image in the series of images in which the static object meets specific requirements.
- The method according to claim 23, further comprising: comparing each image in the series of images with a static object template, and selecting the image in the series with the highest proximity to the static object template as the image in which the static object meets the specific requirements.
- The method according to claim 24, wherein the proximity is the proximity, in terms of at least one of clarity, facial expression, form, and color, between the static object in the image and the static object template.
- The method according to claim 24, wherein the static object template is a static object data model trained based on a training image set.
- The method according to any one of claims 20-26, wherein generating a motion trajectory of the dynamic object in at least a part of the images in the series of images comprises: tracking the movement of the dynamic object from the image in which it first appears to the image in which it last appears to generate the motion trajectory.
- The method according to any one of claims 20-27, wherein generating a motion trajectory of the dynamic object in at least a part of the images in the series of images further comprises: acquiring optical information of the dynamic object in the series of images, from the image in which it first appears to the image in which it last appears, and generating a motion light trace of the dynamic object based on the motion trajectory of the dynamic object and the optical information.
- The method according to claim 28, wherein the optical information comprises at least one of brightness information and color information of the dynamic object in each image in the series, from the image in which the dynamic object first appears to the image in which it last appears.
- The method according to any one of claims 20-29, wherein generating a motion trajectory of the dynamic object in at least a part of the images in the series of images further comprises: for each image in the series from the image in which the dynamic object first appears to the image in which it last appears, analyzing the optical information at each position on the dynamic object to obtain highlighted parts on the dynamic object; and connecting the highlighted parts at the same position on the dynamic object across those images to generate the motion light trace of the dynamic object.
- The method according to claim 30, wherein the highlighted parts on the dynamic object are determined by comparing the brightness information of the dynamic object with the brightness information of the background part of the image.
- The method according to claim 20, wherein the motion trajectory of the dynamic object used to generate the final image comprises the motion trajectory of a dynamic object appearing in more than a predetermined number of images in the series of images.
- The method according to any one of claims 20-32, wherein generating a processed image that does not contain image content related to a dynamic object based on a specific image containing a static object selected from the series of images comprises: replacing the image content related to the dynamic object in the specific image with background content to remove the dynamic-object-related image content from the specific image, wherein the motion trajectory of the dynamic object is synthesized with the replaced specific image to generate the final image.
- The method according to claim 33, wherein replacing the image content related to the dynamic object in the specific image with background content comprises: screening out, from the series of images, an image whose dynamic object position does not overlap the dynamic object position in the specific image and whose static object position deviates least from the static object position in the specific image; and determining the background content based on the content in the screened image at the position corresponding to the dynamic object position in the specific image.
- The method according to claim 34, wherein, in the case where the position of the static object in the screened image is consistent with the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, and of the same size, is determined as the background content.
- The method according to claim 34, wherein, in the case where the position of the static object in the screened image deviates from the position of the static object in the specific image, the content in the screened image at the position corresponding to the dynamic object position in the specific image, adjusted based on the deviation value, is determined as the background content.
- The method according to any one of claims 20-36, further comprising: identifying, as a static object, an object that is present in every image in the series of images and whose position shift is smaller than a specific threshold.
- The method according to any one of claims 20-37, further comprising: identifying, as a dynamic object, an object that does not appear in all of the images in the series, or that appears in all of the images but whose position shift is larger than a specific threshold.
- A photographic device, comprising: an image acquisition device for acquiring a series of images, the series of images being obtained by shooting a scene containing at least one of a static object and a dynamic object; and the electronic device according to any one of claims 1-19, for performing image processing on the acquired series of images.
- A device, comprising: at least one processor; and at least one storage device storing instructions thereon which, when executed by the at least one processor, cause the at least one processor to perform the image processing method according to any one of claims 20-38.
- A storage medium storing instructions which, when executed by a processor, enable execution of the image processing method according to any one of claims 20-38.
- A computer program product comprising instructions which, when executed by a processor, enable execution of the image processing method according to any one of claims 20-38.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22845272.8A EP4375925A1 (en) | 2021-07-19 | 2022-07-18 | Photographic image processing method and device |
CN202280049371.1A CN117642771A (zh) | 2021-07-19 | 2022-07-18 | 摄影图像处理方法及设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110812518.5 | 2021-07-19 | ||
CN202110812518.5A CN115701869A (zh) | 2021-07-19 | 2021-07-19 | 摄影图像处理方法及设备 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023001107A1 true WO2023001107A1 (zh) | 2023-01-26 |
Family
ID=84980011
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/106268 WO2023001107A1 (zh) | 2021-07-19 | 2022-07-18 | 摄影图像处理方法及设备 |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP4375925A1 (zh) |
CN (2) | CN115701869A (zh) |
TW (1) | TW202403665A (zh) |
WO (1) | WO2023001107A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117651148A (zh) * | 2023-11-01 | 2024-03-05 | 广东联通通信建设有限公司 | 一种物联网终端管控方法 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9836831B1 (en) * | 2014-07-30 | 2017-12-05 | Google Inc. | Simulating long-exposure images |
CN110166700A (zh) * | 2018-02-13 | 2019-08-23 | 奥多比公司 | 创建选择性虚拟长曝光图像 |
CN111800581A (zh) * | 2020-07-09 | 2020-10-20 | Oppo广东移动通信有限公司 | 图像生成方法、图像生成装置、存储介质与电子设备 |
CN111932587A (zh) * | 2020-08-03 | 2020-11-13 | Oppo广东移动通信有限公司 | 图像处理方法和装置、电子设备、计算机可读存储介质 |
CN112887623A (zh) * | 2021-01-28 | 2021-06-01 | 维沃移动通信有限公司 | 图像生成方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
EP4375925A1 (en) | 2024-05-29 |
CN117642771A (zh) | 2024-03-01 |
TW202403665A (zh) | 2024-01-16 |
CN115701869A (zh) | 2023-02-14 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 22845272; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | WIPO information: entry into national phase | Ref document number: 202280049371.1; Country of ref document: CN |
| | WWE | WIPO information: entry into national phase | Ref document number: 2022845272; Country of ref document: EP |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | ENP | Entry into the national phase | Ref document number: 2022845272; Country of ref document: EP; Effective date: 20240219 |
Ref document number: 2022845272 Country of ref document: EP Effective date: 20240219 |