WO2021129669A1 - Image processing method and system, electronic device, and computer-readable medium - Google Patents

Image processing method and system, electronic device, and computer-readable medium Download PDF

Info

Publication number
WO2021129669A1
WO2021129669A1 (PCT/CN2020/138652)
Authority
WO
WIPO (PCT)
Prior art keywords
image
target
current
buffer queue
current frame
Prior art date
Application number
PCT/CN2020/138652
Other languages
French (fr)
Chinese (zh)
Inventor
姚坤
Original Assignee
RealMe重庆移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by RealMe重庆移动通信有限公司 filed Critical RealMe重庆移动通信有限公司
Publication of WO2021129669A1 publication Critical patent/WO2021129669A1/en

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N 23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N 23/80: Camera processing pipelines; Components thereof
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265: Mixing

Definitions

  • the embodiments of the present application relate to the field of image processing, and more specifically, to an image processing method, an image processing system, an electronic device, and a computer-readable medium.
  • the terminal device provides an anti-shake function.
  • the existing anti-shake functions generally include optical anti-shake and algorithmic anti-shake.
  • the optical image stabilization requires the addition of optical components to the terminal equipment, resulting in an increase in the volume and weight of the terminal equipment.
  • Algorithmic image stabilization generally needs to crop the picture, causing a loss of field of view, and when the jitter amplitude is large the cropped picture cannot adequately compensate for the jitter, so the anti-shake effect is not obvious.
  • the embodiments of the present application provide an image processing method, an image processing system, an electronic device, and a computer-readable medium, which are beneficial to improve the definition of images captured by a terminal device.
  • an image processing method includes: in response to a first trigger operation, collecting a current frame image of a target scene; reading the image cache queue at the current moment, and performing sharpness evaluation on each cached image in the image cache queue at the current moment to obtain the sharpness value of each cached image; extracting the cached image with the largest sharpness value as a key frame image, and matching the key frame image with the current frame image; and when the key frame image is successfully matched with the current frame image, performing fusion processing on the key frame image and the current frame image to generate a target image.
  • an image processing system, in a second aspect, includes: a first trigger operation response module, configured to collect a current frame target image of a target scene in response to the first trigger operation; an image buffer queue reading module, configured to read the image buffer queue at the current moment and evaluate, using preset rules, the sharpness of each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image; a matching module, configured to extract the cached image with the largest sharpness value as a key frame image and match the key frame image with the current frame image; and a target image output module, configured to perform fusion processing on the key frame image and the current frame target image to generate a target image when the key frame image is successfully matched with the current frame target image.
  • an electronic device includes: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to execute the method of the above-mentioned first aspect.
  • a computer-readable medium for storing computer software instructions used to execute the method in the above-mentioned first aspect, which contains the programs designed to execute the above-mentioned aspects.
  • Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application.
  • Fig. 2 shows a schematic diagram of another image processing method according to an embodiment of the present application.
  • FIG. 3 shows a schematic diagram of an image processing method with multiple camera modules according to an embodiment of the present application.
  • Fig. 4 shows a schematic block diagram of an image processing system according to an embodiment of the present application.
  • Fig. 5 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
  • Anti-shake methods are mainly divided into two categories, including optical anti-shake and algorithm anti-shake.
  • OIS (optical image stabilization)
  • Algorithmic anti-shake is also called electronic image stabilization. This technology uses motion vectors for jitter detection: the swing direction and amount of the image are determined from the motion vectors, and the image position is translated accordingly to generate a jitter-free image.
  • when jitter occurs, the backup image is used for compensation according to the gyroscope information to keep the picture stable; however, algorithmic anti-shake often crops the picture, causing a certain loss of field of view, generally more than 10%, and when the jitter amplitude is large the cropped picture cannot adequately compensate for the jitter, so the anti-shake effect is not obvious.
  • anti-shake technology generally only realizes preview and video anti-shake functions.
  • Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in Figure 1, the image processing method includes some or all of the following:
  • S12 Read the image buffer queue at the current moment, and perform sharpness evaluation on each buffer image in the image buffer queue at the current moment to obtain the sharpness value of each buffer image;
  • the above-mentioned image processing method can be applied to electronic devices equipped with at least one camera module, such as terminal devices such as mobile phones and tablet computers.
  • the terminal device may be equipped with multiple camera modules, for example, one or more of a main camera module, a macro camera module, a wide-angle camera module, and a depth camera module.
  • the above-mentioned first trigger operation may be a user's shooting operation in a camera or other third-party shooting application.
  • according to the user's first trigger operation, the current frame image currently collected by the camera module is acquired.
  • after the user triggers the shooting operation, a control instruction can be generated, and the updating of the image cache queue is stopped according to the control instruction.
  • the image buffer queue can be read, and the definition of each buffer image in the image buffer queue at the current moment can be evaluated by using a preset algorithm to obtain the definition evaluation value of each buffer image.
  • the current frame image and the cached image can be stored in different cache areas respectively. For example, the current frame image can be stored in the cache buffer, and the image cache queue can be stored in the capture buffer.
  • an evaluation method based on an energy gradient function can be used to calculate the sharpness of the cached image.
  • the formula can include: D(f) = Σ_y Σ_x ( |f(x+1, y) - f(x, y)|^2 + |f(x, y+1) - f(x, y)|^2 )
  • f(x, y) represents the gray value of the pixel (x, y) in image f
  • D(f) is the result of the image sharpness calculation.
  • the sharpness evaluation result value of each frame of the buffer queue can be obtained.
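  • As an illustration only (the patent provides no code), the following is a minimal Python/NumPy sketch of this energy-gradient sharpness measure and of selecting the sharpest buffered frame; the function names and the assumption that grayscale arrays are passed in are choices made for the example:

```python
import numpy as np

def energy_gradient_sharpness(gray: np.ndarray) -> float:
    """Compute D(f) = sum of |f(x+1,y)-f(x,y)|^2 + |f(x,y+1)-f(x,y)|^2.

    `gray` is a 2-D array of gray values; a larger D(f) indicates a sharper image.
    """
    f = gray.astype(np.float64)
    dx = f[1:, :] - f[:-1, :]   # differences f(x+1, y) - f(x, y) along one axis
    dy = f[:, 1:] - f[:, :-1]   # differences f(x, y+1) - f(x, y) along the other axis
    return float(np.sum(dx ** 2) + np.sum(dy ** 2))

def pick_key_frame(buffered_frames):
    """Return (index, frame) of the buffered image with the largest sharpness value."""
    scores = [energy_gradient_sharpness(frame) for frame in buffered_frames]
    best = int(np.argmax(scores))
    return best, buffered_frames[best]
```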
  • a frame of the buffer image with the highest definition data may be selected as the key frame image.
  • the key frame image is matched with the current frame image taken by the user.
  • SIFT (Scale-Invariant Feature Transform)
  • the SIFT algorithm can include the following steps: 1) scale-space extremum detection: image positions are searched over all scales, and potential interest points invariant to scale and rotation are identified using a difference-of-Gaussian function; 2) keypoint localization: at each candidate location, a finely fitted model is used to determine position and scale, and keypoints are selected according to their stability; 3) orientation assignment: based on the local gradient directions of the image, one or more orientations are assigned to each keypoint location, and all subsequent operations on the image data are performed relative to the orientation, scale, and position of the keypoints, thereby providing invariance to these transformations; 4) keypoint description: local image gradients are measured at the selected scale in a neighborhood around each keypoint, and these gradients are transformed into a representation that tolerates relatively large local shape deformation and illumination changes.
  • after the descriptor corresponding to each feature in the image is obtained, the corresponding feature vector is generated. Based on the feature vectors of the feature points in the image, a feature-point description vector of the whole image can be obtained, and the current frame image and the key frame image are matched based on this description vector.
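  • For illustration, a rough sketch of SIFT-based matching between the key frame image and the current frame image using OpenCV follows; the patent only names the SIFT steps, so the ratio test, the match-count score, and the `cv2.SIFT_create`/`cv2.BFMatcher` choices below are assumptions of this example rather than the patented procedure:

```python
import cv2

def sift_match_score(key_frame_gray, current_frame_gray, ratio=0.75):
    """Count SIFT matches that survive Lowe's ratio test.

    Both inputs are expected to be 8-bit grayscale images; the returned count
    can be compared against a preset threshold to decide whether matching
    succeeded.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(key_frame_gray, None)
    kp2, des2 = sift.detectAndCompute(current_frame_gray, None)
    if des1 is None or des2 is None:
        return 0                       # no features detected in one of the images
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)
```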
  • the threshold for image matching may be preset.
  • when the matching result between the key frame image and the current frame image is greater than the preset threshold, it can be determined that the matching is successful.
  • the key frame image and the current frame image can be fused, so that the fused image is used as the target image and displayed to the user.
  • image fusion can use a variety of algorithms, such as a fusion algorithm based on pyramid transformation, a fusion method based on weighted average, or a fusion method based on PCNN (Pulse Coupled Neural Network), and so on.
  • PCNN (Pulse Coupled Neural Network)
  • if the matching result between the key frame image and the current frame image is less than the preset threshold, the fusion operation is not performed, and the current frame image is provided to the user as the target image.
  • a threshold may also be pre-configured for the sharpness of the cached images; if the sharpness value of the key frame image is less than the preset threshold, the cached images are of low sharpness, and the current frame image can be provided to the user directly as the target image.
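  • A hedged sketch of how the matching threshold, the sharpness threshold, and the fallback to the current frame image could be wired together; the concrete threshold values and the weighted-average fusion via `cv2.addWeighted` are illustrative assumptions, since the text leaves the fusion algorithm and thresholds open:

```python
import cv2

def produce_target_image(current_frame, key_frame, match_score,
                         match_threshold=50, key_sharpness=None,
                         sharpness_threshold=None):
    """Return the fused image when matching succeeds, else the current frame.

    `current_frame` and `key_frame` are assumed to be images of the same size
    and type; the numeric thresholds are placeholders, not values from the patent.
    """
    # Low-sharpness cached image: skip fusion and keep the captured frame.
    if (key_sharpness is not None and sharpness_threshold is not None
            and key_sharpness < sharpness_threshold):
        return current_frame
    # Matching failed: skip fusion and keep the captured frame.
    if match_score < match_threshold:
        return current_frame
    # Matching succeeded: simple weighted-average fusion (pyramid- or
    # PCNN-based fusion could be substituted here).
    return cv2.addWeighted(current_frame, 0.5, key_frame, 0.5, 0)
```

  • In this sketch, `match_score` would come from the SIFT matching step and `key_sharpness` from the energy-gradient evaluation; both thresholds would need to be tuned per device.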
  • for the image cache queue, the top n cached images with the highest sharpness values can also be selected and matched with the current frame image separately; the cached image with the highest matching degree is then selected as the final key frame image and fused with the current frame image.
  • the image processing method provided in the embodiments of the present application selects the cached image with the largest sharpness value as the key frame image to match against the current frame image, and then performs image fusion according to the matching result to generate the target image.
  • the target image thus combines the features of the current frame image and the key frame image, effectively avoiding unclear images caused by jitter.
  • Fig. 2 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in Figure 2, the image processing method includes some or all of the following:
  • S12 Read the image buffer queue at the current moment, and perform sharpness evaluation on each buffer image in the image buffer queue at the current moment to obtain the sharpness value of each buffer image;
  • the above-mentioned second trigger operation may be a control operation of the user opening a camera application or another third-party shooting application and entering the shooting preview interface.
  • an image buffer queue can be established, for example, a RAW data frame queue can be established.
  • start to continuously collect the current image in the current scene and save the continuously collected current image to the image buffer queue.
  • the length of the buffer queue can be configured in advance, for example, the length of the queue can be set to a value such as 8, 10, or 15.
  • multiple continuously collected current images can be buffered in the RAW data frame queue, and an independent storage space can be configured in the terminal device for storing the cached images.
  • if there is no free storage space left in the image buffer queue, the newly collected current image overwrites the earliest cached image already stored in the image buffer queue. For example, if the length of the RAW data frame queue is 10 and the queue already holds cached images M1, M2, ..., M9, M10, then when M11 is collected, M11 overwrites M1, M12 overwrites M2, and so on. As a result, when the user takes a photo, each cached image in the RAW data frame queue is an image whose capture time is closest to the moment the user takes the photo.
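  • A minimal sketch of such a fixed-length buffer in which the newest frame replaces the oldest once the queue is full (the M11-covers-M1 behavior described above); the use of `collections.deque` with `maxlen` is an implementation convenience assumed here, not something the text specifies:

```python
from collections import deque

class ImageBufferQueue:
    """Fixed-length image buffer: once full, the newest frame replaces the oldest."""

    def __init__(self, max_len: int = 10):
        self._frames = deque(maxlen=max_len)   # e.g. 8, 10 or 15 as configured

    def push(self, frame):
        # When the deque is full, appending automatically drops the oldest frame,
        # so M11 effectively overwrites M1, M12 overwrites M2, and so on.
        self._frames.append(frame)

    def snapshot(self):
        """Return the buffered frames at the current moment (oldest first)."""
        return list(self._frames)
```

  • With `max_len=10`, pushing an eleventh frame silently drops M1, matching the overwrite behavior described above.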
  • the image processing method provided in the embodiments of the present application establishes an image buffer queue in advance, so that when the terminal device takes a picture in response to the user's first trigger operation, it can acquire the current frame image while simultaneously performing sharpness evaluation on each cached image in the image buffer queue to obtain the corresponding sharpness values.
  • the foregoing method may further include:
  • Step S21 Activate at least two of the camera components in response to the second triggering operation, collect the current images of the target scene through the two camera components, and save them in the corresponding image buffer queue;
  • Step S22 performing scene recognition on the current image to obtain the scene category of the target scene
  • Step S23 Determine the corresponding target camera component and the target image cache queue corresponding to the target camera component according to the scene category, so as to read the target image cache queue when responding to the first trigger request.
  • multiple camera modules can be activated at the same time. For example, start the main camera module and the wide-angle camera module at the same time. At this time, corresponding image buffer queues can be created for the main camera module and the wide-angle camera module respectively.
  • the scene corresponding to the current frame can be identified; for example, a macro shooting scene, a wide-angle shooting scene, a night shooting scene, or a sports shooting scene.
  • the user can pre-configure the correspondence between the shooting scene and the target camera component.
  • after the corresponding shooting scene is determined, the corresponding target camera component can be determined, and the image buffer queue corresponding to that target camera component can be extracted. Sharpness evaluation is then performed on the cached images in that image buffer queue, and the other buffer queues are cleared. As a result, the finally generated fused image can have a better display effect.
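  • A small sketch of the scene-to-camera lookup this step implies; the scene labels, the `classify_scene` helper, and the queue handles are hypothetical placeholders for whatever scene-recognition model and camera components the device actually provides:

```python
# Hypothetical mapping from the recognized scene category to the camera
# component whose buffer queue should be used; the labels are placeholders.
SCENE_TO_CAMERA = {
    "macro": "macro_module",
    "wide_angle": "wide_angle_module",
    "night": "main_module",
    "sports": "main_module",
}

def select_target_queue(current_image, queues, classify_scene):
    """Pick the buffer queue of the camera component matching the recognized scene.

    `queues` maps camera component names to lists of buffered frames, and
    `classify_scene` stands in for whatever scene-recognition function the
    device provides.
    """
    scene = classify_scene(current_image)
    camera = SCENE_TO_CAMERA.get(scene, "main_module")
    # Clear the queues of the non-selected camera components, as described above.
    for name, frames in queues.items():
        if name != camera:
            frames.clear()
    return camera, queues[camera]
```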
  • the above-mentioned image buffer queue may also include at least one frame of cached image collected after the current frame target image.
  • after the user shoots at time t and obtains the current frame image, cached images can continue to be collected at time t+1 and saved in the image buffer queue.
  • the current frame image collected by the user at time t may be a frame of image in the image buffer queue.
  • after the current frame image is collected, the user may still hold the current shooting posture and scene content, so one or more frames continuous with the current frame image can still be collected. Therefore, if the sharpness of the cached frames preceding the current frame image in the image buffer queue is not high, the frames following the current frame image can still be used for matching and fusion. In this way, the situation where there is no usable cached image in the image buffer queue, or where all cached images have low sharpness, can be avoided to the greatest extent.
  • the image processing method of the embodiment of the present application establishes an image cache queue corresponding to each camera component when the user enters the shooting function, and stores the cache image corresponding to the current scene in the image cache queue. While the user is taking a photo to obtain the current frame of image, the sharpness evaluation of each buffered image in the image buffer queue can be performed to obtain the corresponding sharpness value. And select one or more frames of the buffered image with the largest sharpness value as the key frame image to match the current frame image, and then determine whether to perform image fusion to generate the target image according to the matching result.
  • the target image thus combines the features of the current frame image and the key frame image, effectively avoiding unclear images caused by jitter; image fusion can further improve the details of the current frame image.
  • there is no need to crop the cached images, which effectively avoids loss of field of view and effectively improves image quality.
  • the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
  • FIG. 4 shows a schematic block diagram of an image processing system 40 according to an embodiment of the present application.
  • the image processing system 40 includes:
  • the first trigger operation response module 401 may be configured to collect a target image of the current frame of the target scene in response to the first trigger operation.
  • the image buffer queue reading module 402 is used to read the image buffer queue at the current moment, and to use preset rules to evaluate the sharpness of each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image.
  • the matching module 403 is configured to extract the buffer image with the largest sharpness value as a key frame image, and match the key frame image with the current frame image.
  • the target image output module 404 is configured to perform fusion processing on the key frame image and the current frame target image to generate a target image when the key frame image is successfully matched with the current frame target image.
  • the image processing system of the embodiment of the present application can establish an image buffer queue in advance, so that when the terminal device takes a picture in response to the user's first trigger operation, it can obtain the current frame image while simultaneously evaluating the sharpness of each cached image in the image buffer queue to obtain the corresponding sharpness value; the cached image with the largest sharpness value can then be selected as the key frame image to match the current frame image, and image fusion is performed according to the matching result to generate the target image.
  • the target image can be combined with the features of the current frame image and the key frame image, effectively avoiding unclear images caused by jitter.
  • the image processing system 40 further includes:
  • the current frame image output module is configured to use the current frame image as a target image when the key frame image fails to match the current frame image.
  • the image processing system 40 further includes:
  • the second trigger operation response module is configured to continuously collect the current image of the target scene and save it in the image buffer queue in response to the second trigger operation; and, when there is no free storage space left in the image buffer queue, to overwrite the earliest cached image stored in the image cache queue with the newly collected current image.
  • the image buffer queue includes a plurality of image buffer queues configured respectively corresponding to each camera component.
  • the image buffer queue reading module 402 may also be configured to activate at least two of the camera components in response to a second trigger operation, collect the current image of the target scene through the two camera components, and save the current image in the corresponding image buffer queue.
  • the image processing system 40 further includes:
  • the scene recognition module is used to perform scene recognition on the current image to obtain the scene category of the target scene.
  • the target image buffer queue confirmation module is used to determine the corresponding target camera component and the target image buffer queue corresponding to the target camera component according to the scene category, so as to read the target image cache queue in response to the first trigger request.
  • the image processing system 40 further includes:
  • the pause processing module is used to pause, in response to the first trigger operation, the collection of the current image corresponding to the target scene when the current frame image is acquired.
  • the image buffer queue includes at least one frame of the buffer image acquired after the target image of the current frame.
  • FIG. 5 shows a computer system 500 of an electronic device implementing an embodiment of the present invention according to an embodiment of the present application; the electronic device may be, for example, a mobile phone, a tablet computer, or the like.
  • the computer system 500 includes a central processing unit (Central Processing Unit, CPU) 501, which can execute various appropriate actions and processing according to a program stored in a read-only memory (Read-Only Memory, ROM) 502 or a program loaded from a storage part 508 into a random access memory (Random Access Memory, RAM) 503. The RAM 503 also stores various programs and data required for system operation.
  • the CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504.
  • An input/output (Input/Output, I/O) interface 505 is also connected to the bus 504.
  • the following components are connected to the I/O interface 505: an input part 506 including a keyboard, a mouse, and the like; an output part 507 including a cathode ray tube (Cathode Ray Tube, CRT), a liquid crystal display (LCD), a speaker, and the like;
  • a storage part 508 including a hard disk, etc. and a communication part 509 including a network interface card such as a LAN (Local Area Network) card and a modem.
  • the communication section 509 performs communication processing via a network such as the Internet.
  • a drive 510 is also connected to the I/O interface 505 as needed.
  • a removable medium 511 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, etc., is installed on the drive 510 as needed, so that the computer program read therefrom is installed into the storage portion 508 as needed.
  • an embodiment of the present invention includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from the network through the communication part 509, and/or installed from the removable medium 511.
  • CPU (central processing unit)
  • when the computer program is executed by the central processing unit (CPU) 501, various functions defined in the system of the present application are executed.
  • the computer-readable medium shown in the embodiment of the present invention may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, device, or device, or a combination of any of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • the computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, and a computer-readable program code is carried therein.
  • This propagated data signal can take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • the computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium.
  • the computer-readable medium may send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device .
  • the program code contained on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: wireless, wired, etc., or any suitable combination of the foregoing.
  • the units described in the embodiments of the present invention may be implemented in software or hardware, and the described units may also be provided in a processor. Among them, the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • the present application also provides a computer-readable medium.
  • the computer-readable medium may be included in the electronic device described in the above-mentioned embodiments, or it may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by an electronic device, the electronic device realizes the method described in the following embodiments. For example, the electronic device can implement the steps shown in FIG. 1.
  • the disclosed system, device, and method can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of units is only a logical functional division; in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • if the functions are implemented in the form of a software functional unit and sold or used as an independent product, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods of the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed are an image processing method and system, an electronic device, and a computer-readable medium. The method comprises: in response to a first triggering operation, collecting the current frame image of a target scene; reading an image cache queue at the current moment, and performing sharpness evaluation on each cached image in the image cache queue at the current moment to acquire a sharpness value of each cached image; extracting the cached image with the maximum sharpness value as a key frame image, and matching the key frame image with the current frame image; and when the key frame image is successfully matched with the current frame image, performing fusion processing on the key frame image and the current frame image to generate a target image. The method, the system, the electronic device and the computer-readable medium of the embodiments of the present application can effectively avoid a blurry image caused by shaking during photographing.

Description

Image processing method and system, electronic device, and computer-readable medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on December 23, 2019 with application number 201911340930.0 and entitled "Image processing method and apparatus, computer-readable medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The embodiments of the present application relate to the field of image processing, and more specifically, to an image processing method, an image processing system, an electronic device, and a computer-readable medium.
Background
To avoid problems such as blurred or unclear images caused by jitter when a mobile terminal device takes pictures, terminal devices are provided with anti-shake functions. Existing anti-shake functions generally include optical image stabilization and algorithmic image stabilization. Optical image stabilization requires adding optical components to the terminal device, which increases its volume and weight. Algorithmic image stabilization generally needs to crop the picture, causing a loss of field of view, and when the jitter amplitude is large, the cropped picture cannot adequately compensate for the jitter, so the anti-shake effect is not obvious.
Summary of the Invention
In view of this, the embodiments of the present application provide an image processing method, an image processing system, an electronic device, and a computer-readable medium, which help to improve the sharpness of images captured by a terminal device.
In a first aspect, an image processing method is provided. The image processing method includes: in response to a first trigger operation, collecting a current frame image of a target scene; reading the image buffer queue at the current moment, and performing sharpness evaluation on each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image; extracting the cached image with the largest sharpness value as a key frame image, and matching the key frame image with the current frame image; and, when the key frame image is successfully matched with the current frame image, performing fusion processing on the key frame image and the current frame image to generate a target image.
In a second aspect, an image processing system is provided. The system includes: a first trigger operation response module, configured to collect a current frame target image of a target scene in response to the first trigger operation; an image buffer queue reading module, configured to read the image buffer queue at the current moment and evaluate, using preset rules, the sharpness of each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image; a matching module, configured to extract the cached image with the largest sharpness value as a key frame image and match the key frame image with the current frame image; and a target image output module, configured to perform fusion processing on the key frame image and the current frame target image to generate a target image when the key frame image is successfully matched with the current frame target image.
In a third aspect, an electronic device is provided, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to execute the method of the first aspect.
In a fourth aspect, a computer-readable medium is provided for storing computer software instructions used to execute the method of the first aspect, containing the programs designed to execute the above aspects.
In this application, the names of the electronic device, the interactive system, and the like do not limit the devices themselves; in actual implementations, these devices may appear under other names. As long as the functions of the devices are similar to those of this application, they fall within the scope of the claims of this application and their equivalent technologies.
These and other aspects of the present application will be clearer and easier to understand in the description of the following embodiments.
Description of the Drawings
Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of another image processing method according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of an image processing method with multiple camera modules according to an embodiment of the present application.
Fig. 4 shows a schematic block diagram of an image processing system according to an embodiment of the present application.
Fig. 5 shows a schematic block diagram of a computer system of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the drawings in the embodiments of the present application.
It should be understood that the technical solutions of the embodiments of the present application can be applied to various mobile terminal devices with photographing functions, such as mobile phones and tablet computers equipped with cameras.
In existing smart terminal devices, in order to provide better photographing effects, various anti-shake functions are added to mitigate image blur caused by user jitter. Anti-shake methods fall mainly into two categories: optical image stabilization and algorithmic image stabilization. Optical image stabilization (OIS) relies on a special lens or the structure of the CCD photosensitive element to minimize image instability caused by the operator's jitter during use; it does not prevent the device body from shaking, but compensates for the shake optically. Algorithmic image stabilization, also called electronic image stabilization, uses motion vectors for jitter detection: the swing direction and amount of the image are determined from the motion vectors, and the image position is translated accordingly to generate a jitter-free image.
To achieve optical image stabilization, larger optical components and an OIS module must be fitted into the phone, which places considerable demands on the phone's volume, weight, development cost, and portability. For algorithmic stabilization, the solution commonly used by phone manufacturers is to crop a certain proportion of the picture and keep the cropped-off part as a backup; when jitter occurs, the backup image is used for compensation according to the gyroscope information to keep the picture stable. However, algorithmic stabilization often crops the picture, causing a certain loss of field of view, generally more than 10%, and when the jitter amplitude is large the cropped picture cannot adequately compensate for the jitter, so the anti-shake effect is not obvious. In addition, anti-shake technology generally only implements preview and video stabilization.
Fig. 1 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in Fig. 1, the image processing method includes some or all of the following:
S11: In response to a first trigger operation, collect a current frame image of a target scene; and
S12: Read the image buffer queue at the current moment, and perform sharpness evaluation on each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image;
S13: Extract the cached image with the largest sharpness value as a key frame image, and match the key frame image with the current frame image;
S14: When the key frame image is successfully matched with the current frame image, perform fusion processing on the key frame image and the current frame image to generate a target image.
Specifically, the above image processing method can be applied to an electronic device equipped with at least one camera module, such as a terminal device like a mobile phone or a tablet computer. The terminal device may be equipped with multiple camera modules, for example one or more of a main camera module, a macro camera module, a wide-angle camera module, and a depth camera module.
Optionally, in this embodiment of the present application, the above first trigger operation may be the user's shooting operation in the camera or another third-party shooting application. According to the user's first trigger operation, the current frame image currently collected by the camera module is acquired.
Optionally, in this embodiment of the present application, after the user triggers the shooting operation, a control instruction can be generated, and the updating of the image buffer queue is stopped according to the control instruction. At the same time, the image buffer queue can be read, and the sharpness of each cached image in the image buffer queue at the current moment can be evaluated using a preset algorithm to obtain the sharpness evaluation value of each cached image. The current frame image and the cached images can be stored in different buffer areas; for example, the current frame image can be stored in the cache buffer and the image buffer queue in the capture buffer.
For example, an evaluation method based on an energy gradient function can be used to calculate the sharpness of the cached images. Specifically, the formula can include:
D(f) = Σ_y Σ_x ( |f(x+1, y) - f(x, y)|^2 + |f(x, y+1) - f(x, y)|^2 )
where f(x, y) represents the gray value of the pixel (x, y) in image f, and D(f) is the result of the image sharpness calculation.
According to the above method, the sharpness evaluation result value of each cached frame in the buffer queue can be obtained.
Optionally, in this embodiment of the present application, after the sharpness value corresponding to each cached image in the current buffer queue is obtained, the cached frame with the highest sharpness can be selected as the key frame image. This key frame image is then matched against the current frame image taken by the user. For example, the SIFT (Scale-Invariant Feature Transform) algorithm can be used to evaluate the match between the key frame image and the current frame image.
Generally speaking, the SIFT algorithm can include the following steps: 1) scale-space extremum detection: image positions are searched over all scales, and potential interest points invariant to scale and rotation are identified using a difference-of-Gaussian function; 2) keypoint localization: at each candidate location, a finely fitted model is used to determine position and scale, and keypoints are selected according to their stability; 3) orientation assignment: based on the local gradient directions of the image, one or more orientations are assigned to each keypoint location, and all subsequent operations on the image data are performed relative to the orientation, scale, and position of the keypoints, thereby providing invariance to these transformations; 4) keypoint description: local image gradients are measured at the selected scale in a neighborhood around each keypoint, and these gradients are transformed into a representation that tolerates relatively large local shape deformation and illumination changes. After the descriptor corresponding to each feature in the image is obtained, the corresponding feature vector is generated. Based on the feature vectors of the feature points in the image, a feature-point description vector of the whole image can be obtained, and the current frame image and the key frame image are matched based on this description vector.
Optionally, in this embodiment of the present application, a threshold for image matching may be preset. When the matching result between the key frame image and the current frame image is greater than the preset threshold, the match can be determined to be successful. At this point, the key frame image and the current frame image can be fused, and the fused image is used as the target image and presented to the user.
For example, image fusion can use a variety of algorithms, such as a fusion algorithm based on pyramid transformation, a fusion method based on weighted averaging, or a fusion method based on a PCNN (Pulse Coupled Neural Network), and so on. The specific execution process of each algorithm can be implemented with existing technical solutions and is not repeated here. The present disclosure does not specifically limit the fusion algorithm used.
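As an illustration of one of the fusion families named above, the following is a rough Laplacian-pyramid fusion sketch in Python with OpenCV. It assumes the two images are already aligned, of the same size, and 8-bit; the equal averaging of pyramid levels and the number of levels are choices made for this example, and the patent does not prescribe this particular scheme:

```python
import cv2
import numpy as np

def laplacian_pyramid_fuse(img_a, img_b, levels: int = 4):
    """Fuse two aligned, same-size images by averaging their Laplacian pyramids."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)

    def laplacian_pyramid(img):
        gaussians = [img]
        for _ in range(levels):
            gaussians.append(cv2.pyrDown(gaussians[-1]))
        pyramid = []
        for i in range(levels):
            size = (gaussians[i].shape[1], gaussians[i].shape[0])
            upsampled = cv2.pyrUp(gaussians[i + 1], dstsize=size)
            pyramid.append(gaussians[i] - upsampled)
        pyramid.append(gaussians[-1])            # coarsest Gaussian level
        return pyramid

    # Average the two pyramids level by level (a simple illustrative rule).
    fused_levels = [(la + lb) / 2.0
                    for la, lb in zip(laplacian_pyramid(a), laplacian_pyramid(b))]

    # Collapse the fused pyramid back into a single image.
    fused = fused_levels[-1]
    for level in reversed(fused_levels[:-1]):
        size = (level.shape[1], level.shape[0])
        fused = cv2.pyrUp(fused, dstsize=size) + level
    return np.clip(fused, 0, 255).astype(np.uint8)   # assumes 8-bit inputs
```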
Optionally, in this embodiment of the present application, if the matching result between the key frame image and the current frame image is less than the preset threshold, the fusion operation is not performed, and the current frame image is provided to the user as the target image.
Alternatively, a threshold may also be pre-configured for the sharpness of the cached images; if the sharpness value of the key frame image is less than the preset threshold, the cached images are of low sharpness, and the current frame image can be provided to the user directly as the target image.
Alternatively, for the image buffer queue, the top n cached images with the highest sharpness values can also be selected and matched with the current frame image separately; the cached image with the highest matching degree is then selected as the final key frame image, and this key frame image is fused with the current frame image.
In the image processing method provided in the embodiments of the present application, the cached image with the largest sharpness value is selected as the key frame image and matched with the current frame image, and image fusion is then performed according to the matching result to generate the target image. The target image thus combines the features of the current frame image and the key frame image, effectively avoiding unclear images caused by jitter.
Fig. 2 shows a schematic diagram of an image processing method according to an embodiment of the present application. As shown in Fig. 2, the image processing method includes some or all of the following:
S10: In response to a second trigger operation, continuously collect current images of the target scene and save them in the image buffer queue;
S11: In response to a first trigger operation, collect a current frame image of the target scene; and
S12: Read the image buffer queue at the current moment, and perform sharpness evaluation on each cached image in the image buffer queue at the current moment to obtain the sharpness value of each cached image;
S13: Extract the cached image with the largest sharpness value as a key frame image, and match the key frame image with the current frame image;
S14: When the key frame image is successfully matched with the current frame image, perform fusion processing on the key frame image and the current frame image to generate a target image.
Optionally, in this embodiment of the present application, the above second trigger operation may be a control operation in which the user opens the camera application, or another third-party shooting application, and enters the shooting preview interface. For example, referring to Fig. 2, after the user taps the "Camera" control on the main interface of the terminal device to enter the camera, an image buffer queue can be established, for example a RAW data frame queue. At the same time, the device starts to continuously collect current images of the current scene and saves them to the image buffer queue. For example, the length of the buffer queue can be configured in advance, e.g. set to 8, 10, or 15. Multiple continuously collected current images can be buffered in the RAW data frame queue, and an independent storage space can be configured in the terminal device for storing the cached images.
Optionally, in this embodiment of the present application, if there is no free storage space left in the image buffer queue, the newly collected current image overwrites the earliest cached image already stored in the image buffer queue. For example, if the length of the RAW data frame queue is 10 and images M1, M2, ..., M9, M10 are already cached in the queue, then when M11 is collected, M11 overwrites M1, M12 overwrites M2, and so on. As a result, when the user takes a photo, the cached images in the RAW data frame queue are the images closest in time to the moment the user takes the photo.
Optionally, in this embodiment of the present application, it can also be configured that, when there is no free storage space left in the buffer queue, only the last 3 or 5 cached images in the current buffer queue are retained and the other images are cleared, thereby providing more buffer space for the buffer queue.
In the image processing method provided in the embodiments of the present application, an image buffer queue is established in advance, so that when the terminal device takes a picture in response to the user's first trigger operation, it can obtain the current frame image while simultaneously performing sharpness evaluation on each cached image in the image buffer queue to obtain the corresponding sharpness values.
可选地,在本申请实施例中,基于上述内容,参考图3所示,上述的方法还可以包括:Optionally, in the embodiment of the present application, based on the foregoing content and referring to FIG. 3, the foregoing method may further include:
步骤S21,响应于第二触发操作激活至少两个所述摄像组件,分别通过两个摄像组件采集所述目标场景的当前图像,并保存至对应的图像缓存队列中;Step S21: Activate at least two of the camera components in response to the second triggering operation, collect the current images of the target scene through the two camera components, and save them in the corresponding image buffer queue;
步骤S22,对所述当前图像进行场景识别以获取所述目标场景的场景类别;Step S22, performing scene recognition on the current image to obtain the scene category of the target scene;
步骤S23,根据所述场景类别确定对应的目标摄像组件以及目标摄像组件对应的目标图像缓存队列,以用于在响应所述第一触发请求时,读取所述目标图像缓存队列。Step S23: Determine the corresponding target camera component and the target image cache queue corresponding to the target camera component according to the scene category, so as to read the target image cache queue when responding to the first trigger request.
具体来说,用户在进入摄像功能或者打开其他第三方摄像应用程序时,可以同时启动多个摄像模组。例如,同时启动主摄像模组和广角摄像模组。此时便可以分别针对主摄像模组和广角摄像模组创建对应的图像缓存队列。在用户拍摄当前帧图像后,便可以对当前帧对应的场景进行识别;例如,微距拍摄场景、广角拍摄场景、夜晚拍摄场景或者运动拍摄场景等。用户可以预先配置拍摄场景与目标摄像组件之间的对应关系。在确定对应的拍摄场景后,便可以确定对应的目标摄像组件,并提取该目标摄像组件对应的图像缓存队列。再对该图像缓存队列中的缓存图像进行清晰度评估,而对其他缓存队列进行清空处理。从而使得最终生成的融合图像可以具有更加优秀的显示效果。Specifically, when the user enters the camera function or opens other third-party camera applications, multiple camera modules can be activated at the same time. For example, start the main camera module and the wide-angle camera module at the same time. At this time, corresponding image buffer queues can be created for the main camera module and the wide-angle camera module respectively. After the user captures the current frame of image, the scene corresponding to the current frame can be identified; for example, a macro shooting scene, a wide-angle shooting scene, a night shooting scene, or a sports shooting scene. The user can pre-configure the correspondence between the shooting scene and the target camera component. After the corresponding shooting scene is determined, the corresponding target camera component can be determined, and the image buffer queue corresponding to the target camera component can be extracted. Then, the definition of the buffered image in the image buffering queue is evaluated, and the other buffering queues are cleared. As a result, the finally generated fused image can have a better display effect.
Optionally, in this embodiment of the present application, the above image buffer queue may further include at least one buffered image collected after the current frame target image.
For example, after the user takes a shot at time t and obtains the current frame image, buffered images can continue to be collected at time t+1 and saved into the image buffer queue. The current frame image collected at time t may itself be one of the frames in the image buffer queue.
When the user captures an image, the current shooting posture and the scene content may well remain unchanged after the current frame image is collected. In that case, one or more frames that are temporally continuous with the current frame image can still be collected. Thus, even if the buffered frames preceding the current frame image in the image buffer queue are not sharp enough, the frames collected after the current frame image can still be used for matching and fusion. This minimizes the situation in which no usable buffered image exists in the image buffer queue, or in which all buffered images have low sharpness.
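A minimal sketch of this post-shutter buffering is given below, assuming the camera exposes a capture() call and the queue a push() method (both hypothetical names); the number of extra frames collected after time t is likewise an assumption.

```python
def buffer_after_capture(camera, queue, extra_frames=2):
    """Take the current frame at time t, then keep collecting a few more
    frames (t+1, t+2, ...) into the buffer queue so that fusion can fall
    back on frames captured after the shutter moment."""
    current_frame = camera.capture()   # current frame image at time t
    queue.push(current_frame)          # the current frame is itself one frame in the queue
    for _ in range(extra_frames):
        queue.push(camera.capture())   # frames collected after time t
    return current_frame
```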
Therefore, the image processing method of the embodiments of the present application establishes an image buffer queue for each camera component when the user enters the shooting function, and stores buffered images of the current scene in that queue. While the user takes a photo to obtain the current frame image, the sharpness of each buffered image in the image buffer queue can be evaluated to obtain the corresponding sharpness values, and the one or more buffered images with the highest sharpness values are selected as key frame images to be matched against the current frame image; whether to perform image fusion to generate the target image is then decided according to the matching result. The target image can thus fuse the features of the current frame image and the key frame image, effectively avoiding image blur caused by shake, and the fusion can further enhance the details of the current frame image. Moreover, the buffered images do not need to be cropped, which avoids loss of the field of view and effectively improves image quality.
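To make the overall flow concrete, the sketch below scores the buffered frames, picks the sharpest one as the key frame, and fuses it with the current frame only when matching succeeds. The patent does not prescribe a particular sharpness metric; the variance-of-Laplacian score, the OpenCV calls, and the match/fuse callables used here are assumptions for illustration.

```python
import cv2
import numpy as np

def sharpness(image):
    """Variance of the Laplacian as a simple sharpness score (assumed metric)."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def process_capture(current_frame, buffered_frames, match, fuse):
    """match and fuse stand in for the matching and fusion steps of the method."""
    if not buffered_frames:
        return current_frame
    scores = [sharpness(f) for f in buffered_frames]
    key_frame = buffered_frames[int(np.argmax(scores))]
    if match(key_frame, current_frame):
        return fuse(key_frame, current_frame)   # fused target image
    return current_frame                        # fall back to the current frame image
```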
It should be understood that the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
It should also be understood that, in the various embodiments of the present application, the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The image processing method according to the embodiments of the present application has been described in detail above. The image processing system according to the embodiments of the present application is described below with reference to the accompanying drawings; the technical features described in the method embodiments apply to the following system embodiments.
FIG. 4 shows a schematic block diagram of an image processing system 40 according to an embodiment of the present application. As shown in FIG. 4, the image processing system 40 includes:
The first trigger operation response module 401 may be configured to collect a current frame target image of a target scene in response to a first trigger operation.
The image buffer queue reading module 402 is configured to read the image buffer queue at the current moment, and to evaluate the sharpness of each buffered image in the image buffer queue at the current moment according to a preset rule to obtain the sharpness value of each buffered image.
The matching module 403 is configured to extract the buffered image with the largest sharpness value as a key frame image, and to match the key frame image with the current frame image.
The target image output module 404 is configured to perform fusion processing on the key frame image and the current frame target image to generate a target image when the key frame image is successfully matched with the current frame target image.
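As one possible reading of the matching module 403 and the target image output module 404, the sketch below uses ORB feature matching as the matching criterion and a simple weighted blend as the fusion step; both choices, along with the min_matches threshold, are assumptions, since the patent leaves the concrete algorithms open.

```python
import cv2

def match_frames(key_frame, current_frame, min_matches=30):
    """Feature-based check that the two frames show the same content (assumed criterion)."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(key_frame, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(current_frame, cv2.COLOR_BGR2GRAY), None)
    if des1 is None or des2 is None:
        return False
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    return len(matches) >= min_matches

def fuse_frames(key_frame, current_frame, weight=0.5):
    """Placeholder fusion: a weighted blend of the two frames (assumes equal sizes)."""
    return cv2.addWeighted(current_frame, 1.0 - weight, key_frame, weight, 0)
```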
Therefore, the image processing system of the embodiments of the present application can establish an image buffer queue in advance, so that when the terminal device takes a photo in response to the user's first trigger operation, it can evaluate the sharpness of each buffered image in the image buffer queue to obtain the corresponding sharpness values while acquiring the current frame image. The buffered image with the highest sharpness value can then be selected as the key frame image and matched against the current frame image, and image fusion is performed according to the matching result to generate the target image. The target image can thus fuse the features of the current frame image and the key frame image, effectively avoiding image blur caused by shake.
Optionally, in this embodiment of the present application, the image processing system 40 further includes:
The current frame image output module is configured to use the current frame image as the target image when the key frame image fails to match the current frame image.
Optionally, in this embodiment of the present application, the image processing system 40 further includes:
The second trigger operation response module is configured to continuously collect current images of the target scene and save them into the image buffer queue in response to a second trigger operation, and to overwrite the previously stored preceding buffered images in the image buffer queue with the collected current image when there is no remaining storage space in the image buffer queue.
Optionally, in this embodiment of the present application, the image buffer queue includes a plurality of image buffer queues respectively configured for each camera component.
The image buffer queue reading module 402 may further be configured to activate at least two of the camera components in response to a second trigger operation, collect current images of the target scene through the two camera components respectively, and save them into the corresponding image buffer queues.
Optionally, in this embodiment of the present application, the image processing system 40 further includes:
The scene recognition module is configured to perform scene recognition on the current image to obtain the scene category of the target scene.
The target image buffer queue confirmation module is configured to determine, according to the scene category, the corresponding target camera component and the target image buffer queue corresponding to the target camera component, so as to read the target image buffer queue when responding to the first trigger request.
Optionally, in this embodiment of the present application, the image processing system 40 further includes:
The pause processing module is configured to pause the collection of the current image corresponding to the target scene when the current frame image is collected in response to the first trigger operation.
Optionally, in this embodiment of the present application, the image buffer queue includes at least one buffered image collected after the current frame target image.
It should be understood that the above and other operations and/or functions of the units in the image processing system 40 according to the embodiments of the present application are respectively intended to implement the corresponding processes of the method in FIG. 1; for brevity, details are not repeated here.
FIG. 5 shows a computer system 500 of an electronic device for implementing an embodiment of the present invention; the electronic device may be, for example, a mobile phone or a tablet computer.
It should be noted that the computer system 500 of the electronic device shown in FIG. 5 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present invention.
As shown in FIG. 5, the computer system 500 includes a central processing unit (CPU) 501, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage portion 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for system operation. The CPU 501, the ROM 502, and the RAM 503 are connected to one another through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input portion 506 including a keyboard, a mouse, and the like; an output portion 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 508 including a hard disk and the like; and a communication portion 509 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 509 performs communication processing via a network such as the Internet. A drive 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 510 as needed, so that a computer program read from it can be installed into the storage portion 508 as needed.
In particular, according to embodiments of the present invention, the processes described with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present invention includes a computer program product, which includes a computer program carried on a computer-readable medium, and the computer program contains program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the embodiments of the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, the computer-readable storage medium may be any tangible medium that contains or stores a program which can be used by or in combination with an instruction execution system, apparatus, or device. In the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, and such a medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the above.
The units described in the embodiments of the present invention may be implemented in software or in hardware, and the described units may also be provided in a processor. In some cases, the names of these units do not constitute a limitation on the units themselves.
It should be noted that, as another aspect, the present application also provides a computer-readable medium. The computer-readable medium may be included in the electronic device described in the above embodiments, or it may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to implement the methods described in the foregoing embodiments. For example, the electronic device can implement the steps shown in FIG. 1.
In addition, the above drawings are merely schematic illustrations of the processing included in the methods according to the exemplary embodiments of the present invention and are not intended to be limiting. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of these processes. In addition, it is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered to go beyond the scope of the present application.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division into units is merely a logical functional division, and there may be other division manners in actual implementation; for instance, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may be in electrical, mechanical, or other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods of the embodiments of the present application. The aforementioned storage media include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed in the present application, and such changes or substitutions shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. An image processing method, characterized in that the method comprises:
    in response to a first trigger operation, collecting a current frame image of a target scene; and
    reading an image buffer queue at a current moment, and performing sharpness evaluation on each buffered image in the image buffer queue at the current moment to obtain a sharpness value of each buffered image;
    extracting the buffered image with the largest sharpness value as a key frame image, and matching the key frame image with the current frame image; and
    when the key frame image is successfully matched with the current frame image, performing fusion processing on the key frame image and the current frame image to generate a target image.
  2. The image processing method according to claim 1, characterized in that the method further comprises:
    when the key frame image fails to match the current frame image, using the current frame image as the target image.
  3. The image processing method according to claim 1, characterized in that, before responding to the first trigger operation, the method further comprises:
    in response to a second trigger operation, continuously collecting current images of the target scene and saving them into the image buffer queue; and
    when there is no remaining storage space in the image buffer queue, overwriting the previously stored preceding buffered images in the image buffer queue with the collected current image.
  4. The image processing method according to claim 1, characterized in that the image buffer queue comprises a plurality of image buffer queues respectively configured for each camera component; and the method further comprises:
    in response to a second trigger operation, activating at least two of the camera components, collecting current images of the target scene through the two camera components respectively, and saving them into the corresponding image buffer queues.
  5. The image processing method according to claim 4, characterized in that the method further comprises:
    performing scene recognition on the current image to obtain a scene category of the target scene;
    determining, according to the scene category, a corresponding target camera component and a target image buffer queue corresponding to the target camera component, so as to read the target image buffer queue when responding to the first trigger request.
  6. The image processing method according to claim 1, characterized in that, when the current frame image is collected in response to the first trigger operation, the method further comprises:
    pausing the collection of the current image corresponding to the target scene.
  7. The image processing method according to claim 1, characterized in that the image buffer queue comprises at least one buffered image collected after the current frame target image.
  8. The image processing method according to claim 3, characterized in that the second trigger operation is a control operation of opening a camera application or a third-party shooting application and entering a shooting preset interface;
    the image buffer queue is created when the second trigger operation occurs.
  9. The image processing method according to claim 1, characterized in that the first trigger operation is a shooting operation in a camera application or a third-party shooting application.
  10. An image processing system, characterized in that the image processing system comprises:
    a first trigger operation response module, configured to collect a current frame target image of a target scene in response to a first trigger operation;
    an image buffer queue reading module, configured to read an image buffer queue at a current moment, and to perform sharpness evaluation on each buffered image in the image buffer queue at the current moment according to a preset rule to obtain a sharpness value of each buffered image;
    a matching module, configured to extract the buffered image with the largest sharpness value as a key frame image, and to match the key frame image with the current frame image;
    a target image output module, configured to perform fusion processing on the key frame image and the current frame target image to generate a target image when the key frame image is successfully matched with the current frame target image.
  11. The image processing system according to claim 10, characterized in that the image processing system further comprises:
    a current frame image output module, configured to use the current frame image as the target image when the key frame image fails to match the current frame image.
  12. The image processing system according to claim 10, characterized in that the image processing system further comprises:
    a second trigger operation response module, configured to continuously collect current images of the target scene and save them into the image buffer queue in response to a second trigger operation, and to overwrite the previously stored preceding buffered images in the image buffer queue with the collected current image when there is no remaining storage space in the image buffer queue.
  13. The image processing system according to claim 10, characterized in that the image buffer queue comprises a plurality of image buffer queues respectively configured for each camera component;
    the image buffer queue reading module is further configured to activate at least two of the camera components in response to a second trigger operation, collect current images of the target scene through the two camera components respectively, and save them into the corresponding image buffer queues.
  14. The image processing system according to claim 13, characterized in that the image processing system further comprises:
    a scene recognition module, configured to perform scene recognition on the current image to obtain a scene category of the target scene;
    a target image buffer queue confirmation module, configured to determine, according to the scene category, a corresponding target camera component and a target image buffer queue corresponding to the target camera component, so as to read the target image buffer queue when responding to the first trigger request.
  15. The image processing system according to claim 10, characterized in that the image processing system further comprises:
    a pause processing module, configured to pause the collection of the current image corresponding to the target scene when the current frame image is collected in response to the first trigger operation.
  16. The image processing system according to claim 10, characterized in that the image buffer queue comprises at least one buffered image collected after the current frame target image.
  17. The image processing system according to claim 12, characterized in that the second trigger operation is a control operation of opening a camera application or a third-party shooting application and entering a shooting preset interface;
    the image buffer queue is created when the second trigger operation occurs.
  18. The image processing system according to claim 10, characterized in that the first trigger operation is a shooting operation in a camera application or a third-party shooting application.
  19. An electronic device, characterized in that the electronic device comprises:
    one or more processors;
    a storage device, configured to store one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method according to any one of claims 1 to 9.
  20. A computer-readable medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the image processing method according to any one of claims 1 to 9 is implemented.
PCT/CN2020/138652 2019-12-23 2020-12-23 Image processing method and system, electronic device, and computer-readable medium WO2021129669A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911340930.0 2019-12-23
CN201911340930.0A CN111131698B (en) 2019-12-23 2019-12-23 Image processing method and device, computer readable medium and electronic equipment

Publications (1)

Publication Number Publication Date
WO2021129669A1 true WO2021129669A1 (en) 2021-07-01

Family

ID=70501280

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138652 WO2021129669A1 (en) 2019-12-23 2020-12-23 Image processing method and system, electronic device, and computer-readable medium

Country Status (2)

Country Link
CN (1) CN111131698B (en)
WO (1) WO2021129669A1 (en)

Cited By (1)

Publication number Priority date Publication date Assignee Title
CN115797164A (en) * 2021-09-09 2023-03-14 同方威视技术股份有限公司 Image splicing method, device and system in fixed view field

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
CN111131698B (en) * 2019-12-23 2021-08-27 RealMe重庆移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment
CN111562948B (en) * 2020-06-29 2020-11-10 深兰人工智能芯片研究院(江苏)有限公司 System and method for realizing parallelization of serial tasks in real-time image processing system
CN111726533B (en) * 2020-06-30 2021-11-16 RealMe重庆移动通信有限公司 Image processing method, image processing device, mobile terminal and computer readable storage medium
CN112367459B (en) * 2020-10-23 2022-05-13 深圳市锐尔觅移动通信有限公司 Image processing method, electronic device, and non-volatile computer-readable storage medium
CN112312023B (en) * 2020-10-30 2022-04-08 北京小米移动软件有限公司 Camera buffer queue allocation method and device, electronic equipment and storage medium
CN112435231A (en) * 2020-11-20 2021-03-02 深圳市慧鲤科技有限公司 Image quality scale generation method, and method and device for evaluating image quality
CN113706421B (en) * 2021-10-27 2022-02-22 深圳市慧鲤科技有限公司 Image processing method and device, electronic equipment and storage medium

Citations (7)

Publication number Priority date Publication date Assignee Title
US20090046208A1 (en) * 2007-08-14 2009-02-19 Samsung Electronics Co., Ltd. Image processing method and apparatus for generating intermediate frame image
CN105957008A (en) * 2016-05-10 2016-09-21 厦门美图之家科技有限公司 Panoramic image real-time stitching method and panoramic image real-time stitching system based on mobile terminal
US20170094192A1 (en) * 2015-09-28 2017-03-30 Gopro, Inc. Automatic composition of video with dynamic background and composite frames selected based on frame criteria
CN107155067A (en) * 2017-07-10 2017-09-12 珠海市魅族科技有限公司 Camera control method and device, terminal and storage medium
CN108063920A (en) * 2017-12-26 2018-05-22 深圳开立生物医疗科技股份有限公司 A kind of freeze frame method, apparatus, equipment and computer readable storage medium
CN108322658A (en) * 2018-03-29 2018-07-24 青岛海信移动通信技术股份有限公司 A kind of method and apparatus taken pictures
CN111131698A (en) * 2019-12-23 2020-05-08 RealMe重庆移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment

Family Cites Families (10)

Publication number Priority date Publication date Assignee Title
KR101054349B1 (en) * 2005-11-04 2011-08-04 재단법인 포항산업과학연구원 High-definition billboard display system complements data rates
JP2009016885A (en) * 2007-06-29 2009-01-22 Toshiba Corp Image transfer device
JP2015171097A (en) * 2014-03-10 2015-09-28 キヤノン株式会社 Image processing apparatus and control method thereof
CN104618640B (en) * 2014-12-30 2018-05-01 广东欧珀移动通信有限公司 A kind of photographic method and device
CN105578045A (en) * 2015-12-23 2016-05-11 努比亚技术有限公司 Terminal and shooting method of terminal
CN105578061A (en) * 2016-02-25 2016-05-11 广东欧珀移动通信有限公司 Anti-shaking method and device for photographing, and mobile terminal
CN106331491A (en) * 2016-08-29 2017-01-11 广东欧珀移动通信有限公司 Photographing method and device
CN107610075A (en) * 2017-08-29 2018-01-19 维沃移动通信有限公司 Image combining method and mobile terminal
CN109101931A (en) * 2018-08-20 2018-12-28 Oppo广东移动通信有限公司 A kind of scene recognition method, scene Recognition device and terminal device
CN110602467B (en) * 2019-09-09 2021-10-08 Oppo广东移动通信有限公司 Image noise reduction method and device, storage medium and electronic equipment

Patent Citations (7)

Publication number Priority date Publication date Assignee Title
US20090046208A1 (en) * 2007-08-14 2009-02-19 Samsung Electronics Co., Ltd. Image processing method and apparatus for generating intermediate frame image
US20170094192A1 (en) * 2015-09-28 2017-03-30 Gopro, Inc. Automatic composition of video with dynamic background and composite frames selected based on frame criteria
CN105957008A (en) * 2016-05-10 2016-09-21 厦门美图之家科技有限公司 Panoramic image real-time stitching method and panoramic image real-time stitching system based on mobile terminal
CN107155067A (en) * 2017-07-10 2017-09-12 珠海市魅族科技有限公司 Camera control method and device, terminal and storage medium
CN108063920A (en) * 2017-12-26 2018-05-22 深圳开立生物医疗科技股份有限公司 A kind of freeze frame method, apparatus, equipment and computer readable storage medium
CN108322658A (en) * 2018-03-29 2018-07-24 青岛海信移动通信技术股份有限公司 A kind of method and apparatus taken pictures
CN111131698A (en) * 2019-12-23 2020-05-08 RealMe重庆移动通信有限公司 Image processing method and device, computer readable medium and electronic equipment

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115797164A (en) * 2021-09-09 2023-03-14 同方威视技术股份有限公司 Image splicing method, device and system in fixed view field
CN115797164B (en) * 2021-09-09 2023-12-12 同方威视技术股份有限公司 Image stitching method, device and system in fixed view field

Also Published As

Publication number Publication date
CN111131698A (en) 2020-05-08
CN111131698B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
WO2021129669A1 (en) Image processing method and system, electronic device, and computer-readable medium
US11386699B2 (en) Image processing method, apparatus, storage medium, and electronic device
WO2019134516A1 (en) Method and device for generating panoramic image, storage medium, and electronic apparatus
JP4274233B2 (en) Imaging apparatus, image processing apparatus, image processing method therefor, and program causing computer to execute the method
JP6961797B2 (en) Methods and devices for blurring preview photos and storage media
KR101679290B1 (en) Image processing method and apparatus
CN105704369B (en) A kind of information processing method and device, electronic equipment
CN111064895B (en) Virtual shooting method and electronic equipment
US20110187882A1 (en) Digital image capturing device providing photographing composition and method thereof
JP2020515982A (en) Image processing method, apparatus, computer-readable storage medium, and electronic device
JP2020053774A (en) Imaging apparatus and image recording method
JP2015126451A (en) Recording method for image, electronic equipment and computer program
CN112184722A (en) Image processing method, terminal and computer storage medium
KR20210053121A (en) Method and apparatus for training image processing model, and storage medium
WO2022083118A1 (en) Data processing method and related device
CN108259767B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112153269A (en) Picture display method, device and medium applied to electronic equipment and electronic equipment
KR20140061226A (en) Method and apparatus for displaying image
CN112183200B (en) Eye movement tracking method and system based on video image
JP5805013B2 (en) Captured image display device, captured image display method, and program
EP3304551B1 (en) Adjusting length of living images
CN111385481A (en) Image processing method and device, electronic device and storage medium
CN108495038B (en) Image processing method, image processing device, storage medium and electronic equipment
JP2014085845A (en) Moving picture processing device, moving picture processing method, program and integrated circuit
CN114092323A (en) Image processing method, image processing device, storage medium and electronic equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20905824

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20905824

Country of ref document: EP

Kind code of ref document: A1