WO2018166170A1 - Image processing method and device, and intelligent conferencing terminal - Google Patents
Image processing method and device, and intelligent conferencing terminal
- Publication number: WO2018166170A1 (application PCT/CN2017/103282)
- Authority: WO (WIPO, PCT)
- Prior art keywords: image, determining, image frame, current, depth
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
Definitions
- the present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and an intelligent conference terminal for image processing.
- a smart terminal usually has a video call function, and after the smart terminal establishes a connection with other smart terminals, it can perform a video call based on the video call function it has.
- the smart terminal captures the target object in real time through the camera to form an image frame, and continuously transmits the captured image frame to other intelligent terminal devices.
- For a large smart terminal with a video call function, such as a smart conference tablet, the terminal itself is often fixed in place and generally installed opposite a window, so a user participating in a video call made on the terminal is often backlit.
- In this case, the image frame captured by the camera of the smart terminal cannot clearly display the user's image information, and the closer the user is to the window, the less clearly the user appears in the image frame.
- Therefore, the image information in the image frame needs to be processed before the image frame is sent to other smart terminal devices. In the prior art, however, such processing is usually applied to the whole image, which limits its usefulness.
- Embodiments of the present invention provide a method, an apparatus, and an intelligent conference terminal for image processing, which increase the flexibility of image processing, thereby achieving the purpose of clearly displaying a target object in a captured image frame during a video call.
- In one aspect, an embodiment of the present invention provides a method for image processing, including: acquiring a current live image frame captured by a camera and determining a target focused image in the current live image frame; determining a depth-of-field far limit value of the current live image frame according to the target focused image; and adjusting image parameter information of the image region corresponding to the depth-of-field far limit value.
- In another aspect, an embodiment of the present invention provides an apparatus for image processing, including:
- a live image acquisition module, configured to acquire the current live image frame captured by the camera;
- a focused image determination module, configured to determine the target focused image in the current live image frame;
- a depth-of-field limit determination module, configured to determine the depth-of-field far limit value of the current live image frame according to the target focused image; and
- an image parameter adjustment module, configured to adjust the image parameter information of the image region corresponding to the depth-of-field far limit value.
- In yet another aspect, an embodiment of the present invention provides an intelligent conference terminal, including at least two cameras with parallel optical axes and an apparatus for image processing according to the foregoing embodiments of the present invention.
- In the above method, apparatus, and intelligent conference terminal for image processing, a current live image frame captured by a camera is first acquired and a target focused image in the current live image frame is determined; a depth-of-field far limit value of the current live image frame is then determined according to the target focused image; finally, the image parameter information of the image region corresponding to the depth-of-field far limit value is adjusted.
- The above method, apparatus, and intelligent conference terminal can adjust a partial region of the image frame captured during a video call and efficiently realize the determination and processing of the target region to be processed, which increases the flexibility of image processing and effectively improves the display effect of video participants on the smart terminal.
- FIG. 1 is a schematic flowchart of a method for image processing according to Embodiment 1 of the present invention;
- FIG. 2 is a schematic flowchart of a method for image processing according to Embodiment 2 of the present invention;
- FIG. 3 is a structural block diagram of an apparatus for image processing according to Embodiment 3 of the present invention.
- FIG. 1 is a schematic flowchart of a method for image processing according to Embodiment 1 of the present invention.
- The method is applicable to image processing of captured image frames during a video call and may be performed by an apparatus for image processing, where the apparatus can be implemented by software and/or hardware and is generally integrated in a smart terminal having a video call function.
- In this embodiment, the smart terminal may be a smart mobile terminal such as a mobile phone, a tablet computer, or a notebook, or a fixed electronic device with a video call function such as a desktop computer or a smart conference terminal.
- In this embodiment, the application scenario is preferably a video call. For a fixed smart terminal whose camera faces an indoor window, if the outdoor light intensity is greater than the indoor light intensity, the video participants captured in the current live image frame will be backlit and may not be displayed clearly.
- With the image processing method provided by this embodiment, the specific image area where the indoor window is located can be determined, and image parameters of that area, such as image brightness and image sharpness, can then be adjusted.
- a method for image processing according to Embodiment 1 of the present invention includes the following operations:
- S101: Acquire the current live image frame captured by the camera, and determine the target focused image in the current live image frame.
- In this embodiment, during a video call the camera can capture the capture space in real time, thereby forming the current live image frame. When capturing the capture space, one subject is selected as the target focused image: a dynamic subject in the capture space may be used as the target focused image, in which case the image area corresponding to the dynamic subject is determined in the current live image frame; alternatively, the image corresponding to preset pixel information may be used as the target focused image, in which case the image area corresponding to the preset pixel information must be determined in the current live image frame.
- S102: Determine the depth-of-field far limit value of the current live image frame according to the target focused image.
- Based on the target focused image determined in the above step, the actual distance from the target focused image to the front node of the camera can be determined; this distance is equivalent to the focus distance of the camera at that moment. In this embodiment, the focus distance may be determined according to the current pixel information of the target focused image and the corresponding depth-of-field information, and the depth-of-field range of the image frame captured by the camera may then be determined according to the focus distance and the attribute parameters of the camera.
- In general, the depth-of-field range is bounded by a near limit value and a far limit value: the near limit value is the closest distance between an object displayable in the current live image frame and the camera, and the far limit value is the farthest such distance. The depth-of-field far limit value of the current live image frame can therefore be determined from the determined depth-of-field range.
- S103: Adjust the image parameter information of the image region corresponding to the depth-of-field far limit value.
- In this step, the current live image frame can be understood as an image frame carrying depth-of-field information. After the depth-of-field far limit value is determined, the image region corresponding to the far limit value can be located in the current live image frame, and that region can then be adjusted according to its image parameter information.
- For example, for a fixed smart terminal whose camera faces an indoor window during a video call, the image region corresponding to the depth-of-field far limit value can be treated as the region where the indoor window is located. Adjusting only this region reduces the effect of the window's light intensity on how the video participants are displayed, so that the participants are shown clearly.
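- As a concrete illustration of this step (not taken from the patent itself), the sketch below selects the pixels whose stored depth reaches the depth-of-field far limit value and lowers their brightness; the names depth_map and far_limit and the 0.6 attenuation factor are assumptions of this sketch.

```python
import numpy as np

def dim_far_region(frame_bgr: np.ndarray, depth_map: np.ndarray,
                   far_limit: float, gain: float = 0.6) -> np.ndarray:
    """Reduce brightness of pixels whose depth reaches the depth-of-field far limit.

    frame_bgr : HxWx3 uint8 image (the current live image frame)
    depth_map : HxW float array of per-pixel depth in meters
    far_limit : depth-of-field far limit value in meters
    gain      : attenuation applied to the selected region (assumed value)
    """
    mask = depth_map >= far_limit                 # region at/beyond the far limit (e.g. the window)
    out = frame_bgr.astype(np.float32)
    out[mask] *= gain                             # dim only the selected region
    return np.clip(out, 0, 255).astype(np.uint8)
```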
- The method for image processing provided by Embodiment 1 of the present invention thus first acquires a current live image frame captured by a camera and determines a target focused image in the current live image frame; it then determines the depth-of-field far limit value of the current live image frame according to the target focused image; finally, it adjusts the image parameter information of the image region corresponding to the far limit value. With this method, a partial region of the image frame captured during a video call can be adjusted, so that the video participants are displayed clearly and the flexibility of image processing is increased.
- FIG. 2 is a schematic flowchart of a method for image processing according to Embodiment 2 of the present invention.
- the embodiment of the present invention is optimized based on the foregoing embodiment.
- In this embodiment, the step of acquiring the current live image frame captured by the camera is further refined as: acquiring current image frames respectively captured by at least two cameras, and performing image synthesis on the at least two captured current image frames to obtain the current live image frame, wherein each pixel in the current live image frame has corresponding depth-of-field information.
- On the basis of the above refinement, the step of determining the target focused image in the current live image frame is further refined as: determining the subjects in the current live image frame according to character image features, and determining the current pixel information composing each subject; determining whether a subject is present in the previously acquired live image frame; if the subject is present, determining the historical pixel information composing the subject in the previous live image frame and judging whether the current pixel information matches the historical pixel information; if they do not match, determining that the position of the subject has changed and taking the subject as the target focused image; if they match, determining average pixel information according to the current pixel information of each subject and taking the region corresponding to the average pixel information as the target focused image; and, if no subject is present, acquiring preset focused pixel information and taking the region corresponding to the focused pixel information in the current live image frame as the target focused image.
- The step of determining the depth-of-field far limit value of the current live image frame according to the target focused image may be refined as: determining plane coordinate information of the target focused image according to the current pixel information of the target focused image in the current live image frame; determining a depth value of the target focused image according to the depth-of-field information corresponding to the current pixel information; determining the actual focus distance from the target focused image to the camera according to the plane coordinate information and the depth value; and determining the depth-of-field far limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameters.
- In addition, this embodiment refines the adjustment of image parameter information for the image region corresponding to the depth-of-field far limit value as: acquiring image parameter information of the image region corresponding to the far limit value, the image parameter information including image RGB ratio, color contrast, and image sharpness; and, when the image parameter information does not conform to set standard parameter information, adjusting the image brightness, color contrast, and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
- a second embodiment of the present invention provides a method for image processing, which specifically includes the following operations:
- S201: Acquire current image frames respectively captured by at least two cameras.
- In general, to obtain the depth-of-field information of a captured image frame, an image frame with a stereoscopic sense of space is needed, so at least two cameras with parallel optical axes can capture images from different angles in real time. In this embodiment, the at least two cameras are installed at different positions on the smart terminal, so for the same subject the pixel positions of that subject differ between the image frames captured by the different cameras, and the depth-of-field information of the subject can be determined from these differing pixel positions.
- S202: Perform image synthesis processing on the at least two captured current image frames to obtain the current live image frame.
- In this step, the current image frames captured by the different cameras are combined to obtain a current live image frame with a stereoscopic sense of space. It can be understood that each pixel in the synthesized current live image frame has corresponding depth-of-field information.
- Specifically, the process of determining the depth-of-field information of each pixel may be described as follows: the current image frames captured by the different cameras are stereo-matched to obtain the disparity value of each corresponding point between the frames, and the depth-of-field information of each pixel can then be determined from the relationship between disparity and depth.
- the depth information of each pixel in the current live image frame may be stored for selection of a subsequent image area to be processed.
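- A minimal sketch of the disparity-to-depth step described above, assuming OpenCV's block matcher and a calibrated camera pair whose focal length (in pixels) and baseline are known; the Z = f * B / d relation is standard stereo geometry rather than a formula quoted from the patent.

```python
import cv2
import numpy as np

def depth_from_stereo(left_bgr, right_bgr, focal_px: float, baseline_m: float):
    """Stereo-match two current image frames and convert disparity to per-pixel depth."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # StereoBM returns fixed-point values

    depth = np.full(disparity.shape, np.inf, dtype=np.float32)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]             # Z = f * B / d
    return depth
```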
- Steps S203 to S209 describe the determination of the target focused image in detail.
- S203: Determine the subjects in the current live image frame according to character image features, and determine the current pixel information composing each subject. This step identifies the subjects contained in the current live image frame by means of preset character image features. In general, during a video call the current live image frame captured by the camera often contains one or more subjects, so the number of subjects contained in the frame can be recognized according to the character image features. Once the subjects are recognized, the current pixel information of each subject in the current live image frame can also be determined; the current pixel information can be understood as the range of pixel values of all the pixels composing one subject.
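- The patent does not name a specific detector for the character image features; as one possible sketch, OpenCV's built-in HOG people detector can supply the subject regions, with the pixels inside each returned box standing in for the subject's current pixel information.

```python
import cv2
import numpy as np

def detect_subjects(frame_bgr):
    """Return a bounding box and a pixel mask for each detected person in the frame."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    boxes, _ = hog.detectMultiScale(frame_bgr, winStride=(8, 8))

    subjects = []
    for (x, y, w, h) in boxes:
        mask = np.zeros(frame_bgr.shape[:2], dtype=bool)
        mask[y:y + h, x:x + w] = True      # crude stand-in for the subject's current pixel information
        subjects.append({"box": (x, y, w, h), "mask": mask})
    return subjects
```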
- S204: Determine whether the subject is present in the previously acquired live image frame. If yes, go to step S205; if no, go to step S209.
- This step determines whether a subject in the current live image frame also appears in the previous live image frame. In general, each subject has characteristics that distinguish it from other subjects (such as the color of the person's clothes and any worn accessories), so whether the subject is present in the previous live image frame can be determined according to the subject's characteristics in the current live image frame.
- If the subject is not present in the previous live image frame, the operation of step S209 is performed; if the subject is present, the operation of step S205 is performed.
- S205: After it is determined that the subject is present in the previous live image frame, the pixel positions of the subject in the previous live image frame are determined, and these pixel positions are recorded as the historical pixel information of the subject.
- S206: Determine whether the current pixel information matches the historical pixel information. If not, perform step S207; if so, perform step S208.
- It should be noted that when the position of the smart terminal used for the video call is fixed, the capture space corresponding to its camera does not change; this step matches the historical pixel information of the identified subject against the current pixel information.
- In this embodiment, if the subject is moving, its historical pixel information in the previous live image frame cannot completely match its current pixel information in the current live image frame, and the operation of step S207 is performed. If the subject is stationary, its historical pixel information may match the current pixel information, in which case the operation of step S208 is performed.
- S207: Determine that the position of the subject has changed and take the subject as the target focused image, then perform step S210. In this embodiment, when the historical pixel information of a subject does not match its current pixel information, it can be determined that the subject has moved, and the subject is taken as the target focused image; after the target focused image is determined, the operation of step S210 is performed.
- It should be noted that if several subjects in the current live image frame have changed position, the subject whose historical pixel information matches its current pixel information least may be selected as the target focused image. The degree of matching between the historical pixel information and the current pixel information may be determined according to the number of matching pixels; the smaller the number of matching pixels, the lower the degree of matching.
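- Following the matching rule described above, the degree of matching can be taken as the number of pixels shared by the historical and current pixel sets; the sketch below, which assumes boolean pixel masks, picks the subject with the lowest such count as the target focused image.

```python
import numpy as np

def pick_target_subject(current_masks, historical_masks):
    """Choose the subject whose current pixels overlap its historical pixels the least."""
    scores = [int(np.logical_and(cur, hist).sum())   # matched pixel count = degree of matching
              for cur, hist in zip(current_masks, historical_masks)]
    return int(np.argmin(scores))                    # lowest match means most movement, so it becomes the focus
```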
- S208: Determine the average pixel information according to the current pixel information of each subject, take the region corresponding to the average pixel information as the target focused image, and then perform step S210. In this embodiment, if the current pixel information of every subject in the current live image frame matches its historical pixel information, it can be determined that the subjects are stationary; this step therefore determines the average pixel information of all the subjects in the current live image frame from the current pixel information of each subject, takes the region corresponding to the average pixel information as the target focused image, and performs the operation of step S210 once the target focused image is determined.
- S209: Acquire the preset focused pixel information, determine the region corresponding to the focused pixel information in the current live image frame as the target focused image, and then perform step S210.
- This step handles the case in which no subject is present in the previous live image frame. This generally happens when the captured current live image frame is the first frame, so there is no previous live image frame, or when the previous live image frame simply contains no subject. In this case, the preset focused pixel information is acquired, the region corresponding to the focused pixel information is located in the current live image frame, and that region is taken as the target focused image; the operation of step S210 is performed after the target focused image is determined.
- It should be noted that the capturable range of the camera on the smart terminal is generally fixed, so this embodiment may set the focused pixel information according to the pixel information corresponding to focused images determined while capturing historical image frames.
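- The branching of steps S203 to S209 can be summarised by the following sketch; match_degree, the 0.9 matching threshold, and preset_focus_region are hypothetical helpers introduced only to illustrate the flow and are not names used in the patent.

```python
def choose_target_focus(subjects, previous_subjects, preset_focus_region,
                        match_degree, average_region, threshold: float = 0.9):
    """Illustrative flow of steps S203-S209 for picking the target focused image."""
    if not subjects or not previous_subjects:                 # S209: first frame or no subject to compare
        return preset_focus_region

    degrees = [match_degree(cur, prev) for cur, prev in zip(subjects, previous_subjects)]
    if min(degrees) < threshold:                              # S206/S207: some subject moved
        return subjects[degrees.index(min(degrees))]          # least-matching subject becomes the target
    return average_region(subjects)                           # S208: all subjects still, use the average region
```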
- The current live image frame is synthesized from the current image frames captured by at least two cameras, so it contains the spatial information of each image element: the plane coordinate information displayed on the screen and the depth value that conveys the stereoscopic effect.
- This embodiment may determine the plane coordinate information of the target focused image from its current pixel information. Specifically, an average pixel coordinate value may be computed from the pixel coordinate values of all the pixels in the current pixel information, and this average pixel coordinate value is regarded as the plane coordinate information of the target focused image.
- Likewise, this embodiment can determine the depth-of-field information corresponding to the average pixel coordinate value and use that depth information as the depth value of the target focused image.
- The projection point of the target focused image in stereoscopic space may then be determined according to the plane coordinate information and the depth value, taking the pixel at the upper-left corner of the smart terminal screen as the origin. The actual distance from this projection point to the pixel origin can be computed from the plane coordinate information and the depth value, and the computed distance is regarded as the actual focus distance from the target focused image to the camera.
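- A possible sketch of how the actual focus distance might be computed from the averaged pixel coordinate and its depth value, assuming the pixel coordinate is first back-projected through the camera intrinsics; this pinhole back-projection is an assumption of the sketch, not a formula stated in the text.

```python
import numpy as np

def focus_distance(pixel_coords: np.ndarray, depth_map: np.ndarray,
                   fx: float, fy: float, cx: float, cy: float) -> float:
    """Average the subject's pixel coordinates, look up its depth, and return the distance to the camera."""
    u, v = pixel_coords.mean(axis=0)                       # plane coordinate info = average pixel coordinate
    z = float(depth_map[int(round(v)), int(round(u))])     # depth value at that coordinate
    x = (u - cx) * z / fx                                  # pinhole back-projection (assumed convention)
    y = (v - cy) * z / fy
    return float(np.sqrt(x * x + y * y + z * z))           # actual focus distance to the camera
```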
- In this embodiment, the camera attribute parameters may include a hyperfocal distance and a lens focal length, both of which are determined by the type of camera used. According to the actual focus distance and the acquired camera attribute parameters, the depth-of-field near limit value and far limit value of the current live image frame are determined by the near-limit formula S_near = H * D / (H + D - F) and the far-limit formula S_far = H * D / (H - D - F), where S_near denotes the depth-of-field near limit value, S_far denotes the depth-of-field far limit value, H denotes the hyperfocal distance of the camera, D denotes the actual focus distance, and F denotes the lens focal length of the camera.
- For example, if the hyperfocal distance of the camera is 6.25 meters (under the adopted circle-of-confusion standard), the lens focal length of the camera is 50 mm, and the actual focus distance is 4 meters, then the depth-of-field far limit value is 11.36 meters.
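- The limit formulas reconstructed above can be checked against this example; note that the far-limit form is implied by the 11.36 m figure, while the near-limit form is the symmetric counterpart and should be treated as an assumption.

```python
def depth_of_field_limits(hyperfocal_m: float, focus_m: float, focal_m: float):
    """Near and far depth-of-field limits, using the formulas reconstructed from the text."""
    near = hyperfocal_m * focus_m / (hyperfocal_m + focus_m - focal_m)
    far = hyperfocal_m * focus_m / (hyperfocal_m - focus_m - focal_m)
    return near, far

near, far = depth_of_field_limits(6.25, 4.0, 0.05)   # worked example from the text
print(round(far, 2))                                  # -> 11.36 meters
```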
- The depth-of-field far limit value is equivalent to the farthest distance at which the camera can capture an image, and corresponds to the farthest image region in the current live image frame. This embodiment may therefore determine, according to the depth-of-field far limit value, the corresponding image region and the image parameter information of that region, such as the image RGB ratio, color contrast, and image sharpness.
- The image RGB ratio can be used to determine the brightness value of the image region; the color contrast is a measure of the difference in brightness levels between the brightest white and the darkest black in the light and dark areas of the image region; and the image sharpness can be understood as an index reflecting the clarity of the image plane and the sharpness of image edges: the higher the image sharpness, the higher the detail contrast on the image plane and the clearer the image appears.
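- The text does not fix exact formulas for these parameters; one illustrative way to estimate them (mean gray level for brightness, per-channel shares for the RGB ratio, a bright/dark spread for contrast, and Laplacian variance for sharpness) is sketched below.

```python
import cv2
import numpy as np

def image_parameters(region_bgr: np.ndarray) -> dict:
    """Illustrative estimates of the image parameter information of one image region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    b, g, r = [region_bgr[..., i].mean() for i in range(3)]

    brightness = gray.mean()                                                  # overall brightness level
    rgb_ratio = np.array([r, g, b]) / max(r + g + b, 1e-6)                    # share of each channel
    contrast = (gray.max() - gray.min()) / max(gray.max() + gray.min(), 1e-6) # bright/dark spread
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()                         # edge/detail response
    return {"brightness": brightness, "rgb_ratio": rgb_ratio,
            "contrast": contrast, "sharpness": sharpness}
```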
- In this embodiment, the image parameter information may be compared with the set standard parameter information, and the image brightness, color contrast, and/or image sharpness are adjusted according to the comparison result until the image parameter information conforms to the standard parameter information.
- For example, if the image region corresponding to the depth-of-field far limit value is a window image with high brightness, the display brightness of the window image can be appropriately reduced, thereby achieving the purpose of clearly displaying the video participants' image information in the current live image frame.
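- A sketch of the comparison-and-correction loop described above; the standard values and the simple brightness and contrast corrections are placeholders chosen for illustration, not parameters given in the patent.

```python
import numpy as np

STANDARD = {"brightness": 120.0, "contrast": 0.6}     # assumed standard parameter information

def adjust_region(region_bgr: np.ndarray, measured: dict) -> np.ndarray:
    """Nudge a region's brightness and contrast toward the standard values."""
    out = region_bgr.astype(np.float32)

    if measured["brightness"] > STANDARD["brightness"]:          # e.g. an over-bright window region
        out *= STANDARD["brightness"] / measured["brightness"]   # simple linear brightness reduction

    if measured["contrast"] < STANDARD["contrast"]:              # flat region: spread values around the mean
        mean = out.mean()
        out = mean + (out - mean) * 1.2                          # mild contrast stretch as a placeholder

    return np.clip(out, 0, 255).astype(np.uint8)
```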
- The method for image processing provided by Embodiment 2 of the present invention details the process of acquiring the image frame, the process of determining the target focused image, and the process of determining the depth-of-field far limit value. The method can acquire an image frame synthesized from dual-camera captures and can determine the depth-of-field far limit value of the image frame according to the depth information of the synthesized frame and the determined target focused image, so that the region corresponding to the far limit value can be adjusted.
- In this way, the determination and processing of the target region to be processed are realized efficiently, whole-frame processing of the entire image is avoided, the flexibility of image processing is improved, and image processing efficiency during a video call is increased, which further enhances the display effect of video participants on the smart terminal.
- On the basis of the above optimizations, this embodiment further adds: performing brightness enhancement processing on the subject after the subject is determined. In addition to adjusting the image region corresponding to the depth-of-field far limit value so that the current live image frame shows the video participants clearly, the subjects recognized in the current live image frame can be regarded as video participants and their image regions can be processed directly, i.e. the recognized subjects can be directly subjected to brightness enhancement.
- Specifically, the pixels to be processed may be determined according to the current pixel information of the subject and the corresponding depth information; the image parameter information of the region to be processed is then determined and adjusted so that the brightness of the subject is improved and the subject is displayed better in the current live image frame.
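- For the subject-side enhancement, a minimal sketch that simply lifts the brightness inside the subject's pixel mask; the 1.3 gain is an assumed value.

```python
import numpy as np

def enhance_subject(frame_bgr: np.ndarray, subject_mask: np.ndarray, gain: float = 1.3) -> np.ndarray:
    """Brighten only the pixels belonging to the recognised subject (the video participant)."""
    out = frame_bgr.astype(np.float32)
    out[subject_mask] *= gain
    return np.clip(out, 0, 255).astype(np.uint8)
```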
- FIG. 3 is a structural block diagram of an apparatus for image processing according to Embodiment 3 of the present invention.
- the device is suitable for image processing of captured image frames during a video call, wherein the device can be implemented by software and/or hardware and is generally integrated on a smart terminal having a video call function.
- The apparatus includes: a live image acquisition module 31, a focused image determination module 32, a depth-of-field limit determination module 33, and an image parameter adjustment module 34.
- The live image acquisition module 31 is configured to acquire the current live image frame captured by the camera;
- the focused image determination module 32 is configured to determine the target focused image in the current live image frame;
- the depth-of-field limit determination module 33 is configured to determine the depth-of-field far limit value of the current live image frame according to the target focused image;
- the image parameter adjustment module 34 is configured to adjust the image parameter information of the image region corresponding to the depth-of-field far limit value.
- With this apparatus, the live image acquisition module 31 first acquires the current live image frame captured by the camera; the focused image determination module 32 then determines the target focused image in the current live image frame; the depth-of-field limit determination module 33 then determines the depth-of-field far limit value of the current live image frame according to the target focused image; finally, the image parameter adjustment module 34 adjusts the image parameter information of the image region corresponding to the far limit value.
- The apparatus for image processing provided by Embodiment 3 of the present invention can adjust a partial region of an image frame captured during a video call, efficiently realizes the determination and processing of the target region to be processed, further increases the flexibility of image processing, and effectively improves the display effect of video participants on smart terminals.
- In this embodiment, the live image acquisition module 31 is specifically configured to: acquire the current image frames respectively captured by at least two cameras, and perform image synthesis processing on the at least two captured current image frames to obtain the current live image frame, wherein each pixel in the current live image frame has corresponding depth-of-field information.
- In this embodiment, the focused image determination module 32 includes:
- a subject determining unit, configured to determine the subjects in the current live image frame according to character image features and to determine the current pixel information composing each subject; an information determining unit, configured to determine whether the subject is present in the previously acquired live image frame; a first execution unit, configured to, when the subject is present, determine the historical pixel information composing the subject in the previous live image frame and judge whether the current pixel information matches the historical pixel information, and, if they do not match, determine that the position of the subject has changed and take the subject as the target focused image, or, if they match, determine average pixel information according to the current pixel information of each subject and take the region corresponding to the average pixel information as the target focused image; and a second execution unit, configured to, when no subject is present, acquire the preset focused pixel information and take the region corresponding to the focused pixel information in the current live image frame as the target focused image.
- Further, the focused image determination module 32 also includes a subject processing unit, configured to perform brightness enhancement processing on the subjects after the subjects of the current live image frame are determined according to the character image features.
- In this embodiment, the depth-of-field limit determination module 33 is configured to: determine the plane coordinate information of the target focused image according to the current pixel information of the target focused image in the current live image frame; determine the depth value of the target focused image according to the depth-of-field information corresponding to the current pixel information; determine the actual focus distance from the target focused image to the camera according to the plane coordinate information and the depth value; and determine the depth-of-field far limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameters.
- In this embodiment, the image parameter adjustment module 34 is specifically configured to: acquire the image parameter information of the image region corresponding to the depth-of-field far limit value, the image parameter information including the image RGB ratio, color contrast, and image sharpness; and, when the image parameter information does not conform to the set standard parameter information, adjust the image brightness, color contrast, and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
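- Read together, the four modules form a simple pipeline; the skeleton below mirrors that data flow with hypothetical helper names and is an illustration of the described structure, not code from the patent.

```python
def process_live_frame(left_bgr, right_bgr, camera_params, helpers):
    """Hypothetical end-to-end flow of the described apparatus (modules 31-34)."""
    # Module 31: synthesize the current live image frame with per-pixel depth from two cameras.
    frame, depth_map = helpers["synthesize"](left_bgr, right_bgr)

    # Module 32: determine the target focused image in the current live image frame.
    target_region = helpers["find_target_focus"](frame)

    # Module 33: derive the actual focus distance and the depth-of-field far limit value.
    far_limit = helpers["far_limit"](target_region, depth_map, camera_params)

    # Module 34: adjust image parameter information of the region at/beyond the far limit.
    return helpers["adjust_far_region"](frame, depth_map, far_limit)
```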
- the fourth embodiment of the present invention further provides an intelligent conference terminal, including: at least two cameras with parallel optical axes, and an apparatus for image processing provided by the foregoing embodiments of the present invention.
- the image processing can be performed by the image processing methods provided in the first embodiment and the second embodiment.
- The smart conference terminal is a type of electronic device having a video call function: it integrates a video call system together with at least two cameras with parallel optical axes and the apparatus for image processing provided by the above embodiments of the present invention.
- Because the apparatus for image processing provided by the above embodiments of the present invention is integrated in the smart conference terminal, when a video call is made with other smart terminals having a video call function, the image parameter information of a partial region of the current live image frame captured in real time can be adjusted, which effectively improves the display effect of the video participants on the intelligent conference terminal and further improves the user experience of the intelligent conference terminal.
- the storage medium is, for example, a ROM/RAM, a magnetic disk, an optical disk, or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Disclosed in embodiments of the present invention are an image processing method and device, and an intelligent conferencing terminal. The method comprises: obtaining a current real scene image frame captured by a camera, and determining a target focus image in the current real scene image frame; determining a far limit of depth of field of the current real scene image frame according to the target focus image; and adjusting image parameter information of an image area corresponding to the far limit of depth of field. By means of the method, a local image in an image frame captured during a video call can be adjusted, a target area to be processed can be determined and processed efficiently, the flexibility of image processing is greatly enhanced, and display effects of video participants on intelligent terminals are effectively improved.
Description
本发明涉及图像处理技术领域,尤其涉及一种图像处理的方法、装置及智能会议终端。The present invention relates to the field of image processing technologies, and in particular, to a method, an apparatus, and an intelligent conference terminal for image processing.
目前,智能终端中通常具有视频通话功能,在智能终端与其他智能终端建立连接后,可以基于其具有的视频通话功能进行视频通话。At present, a smart terminal usually has a video call function, and after the smart terminal establishes a connection with other smart terminals, it can perform a video call based on the video call function it has.
一般地,在视频通话时,智能终端通过摄像头对目标对象进行实时捕获形成图像帧,并连续的将捕获的图像帧发送给其它智能终端设备。对于大型的具有视频通话功能的智能终端而言,如智能会议平板,终端自身往往是固定不动的,且一般设置在与窗户相对的位置,基于该智能终端进行视频通话时,参与视频的用户往往处于背光状态,此时,智能终端设备上的摄像头所捕获的图像帧中无法清晰显示用户的图像信息,且用户所处位置越靠近窗户,图像帧中所显示的用户图像信息就越不清晰,由此在向其他智能终端设备发送该图像帧之前,需要对该图像帧中的图像信息进行处理。Generally, during a video call, the smart terminal captures the target object in real time through the camera to form an image frame, and continuously transmits the captured image frame to other intelligent terminal devices. For a large intelligent terminal with a video call function, such as a smart conference tablet, the terminal itself is often fixed and generally disposed at a position opposite to the window, and the user participating in the video is based on the smart terminal performing a video call. It is often in a state of backlight. At this time, the image information of the user cannot be clearly displayed in the image frame captured by the camera on the smart terminal device, and the closer the user is located to the window, the less clear the user image information displayed in the image frame is. Thus, the image information in the image frame needs to be processed before the image frame is sent to other smart terminal devices.
现有技术中对图像信息进行处理时往往是对整体图像的处理,其处理方式存在局限性。In the prior art, when image information is processed, the processing of the entire image is often performed, and the processing manner thereof has limitations.
发明内容Summary of the invention
本发明实施例提供了一种图像处理的方法、装置及智能会议终端,增加了图像处理的灵活性,进而达到了视频通话时清晰显示所捕获图像帧中目标对象的目的。
Embodiments of the present invention provide a method, an apparatus, and an intelligent conference terminal for image processing, which increase the flexibility of image processing, thereby achieving the purpose of clearly displaying a target object in a captured image frame during a video call.
一方面,本发明实施例提供了一种图像处理的方法,包括:In one aspect, an embodiment of the present invention provides a method for image processing, including:
获取通过摄像头捕获的当前实景图像帧,并确定所述当前实景图像帧中的目标聚焦图像;Obtaining a current live image frame captured by the camera and determining a target focused image in the current live image frame;
根据所述目标聚焦图像确定所述当前实景图像帧的景深远界限值;Determining a depth of field limit value of the current live image frame according to the target focused image;
对所述景深远界限值对应的图像区域进行图像参数信息的调节处理。The image region information is subjected to adjustment processing on the image region corresponding to the depth of field limit value.
另一方面,本发明实施例提供了一种图像处理的装置,包括:In another aspect, an embodiment of the present invention provides an apparatus for image processing, including:
实景图像获取模块,用于获取通过摄像头捕获的当前实景图像帧;a real image acquisition module, configured to acquire a current live image frame captured by the camera;
聚焦图像确定模块,用于确定所述当前实景图像帧中的目标聚焦图像;a focused image determining module, configured to determine a target focused image in the current live image frame;
景深界限确定模块,用于根据所述目标聚焦图像确定所述当前实景图像帧的景深远界限值;a depth of field limit determining module, configured to determine a depth of field limit value of the current live image frame according to the target focused image;
图像参数调节模块,用于对所述景深远界限值对应的图像区域进行图像参数信息的调节处理。The image parameter adjustment module is configured to perform image parameter information adjustment processing on the image region corresponding to the depth of field limit value.
又一方面,本发明实施例提供了一种智能会议终端,包括:光轴平行的至少两个摄像头,还包括本发明上述实施例提供的一种图像处理的装置。In another aspect, an embodiment of the present invention provides an intelligent conference terminal, including: at least two cameras having optical axes parallel, and an apparatus for image processing according to the foregoing embodiment of the present invention.
在上述图像处理的方法、装置及智能会议终端中,首先获取通过摄像头捕获的当前实景图像帧,并确定当前实景图像帧中的目标聚焦图像;然后根据目标聚焦图像确定当前实景图像帧的景深远界限值;最终对景深远界限值对应的图像区域进行图像参数信息的调节处理。上述方法、装置以及智能会议终端,能够对视频通话时所捕获图像帧中的局部图像进行调节处理,高效的实现了待处理目标区域的确定以及处理,更好地增加了图像处理的灵活性,有效提升了视频参与者在智能终端上的显示效果。
In the above method, device and intelligent conference terminal for image processing, first, a current live image frame captured by a camera is acquired, and a target focused image in a current live image frame is determined; and then a depth of field of the current live image frame is determined according to the target focused image. The threshold value is finally adjusted for the image parameter information corresponding to the image region corresponding to the depth of field limit value. The above method, device and intelligent conference terminal can adjust the partial image in the image frame captured during the video call, and efficiently realize the determination and processing of the target area to be processed, thereby further increasing the flexibility of image processing. Effectively enhance the display effect of video participants on the smart terminal.
图1为本发明实施例一提供的一种图像处理的方法的流程示意图;FIG. 1 is a schematic flowchart diagram of a method for image processing according to Embodiment 1 of the present invention;
图2为本发明实施例二提供的一种图像处理的方法的流程示意图;2 is a schematic flowchart of a method for image processing according to Embodiment 2 of the present invention;
图3为本发明实施例三提供的一种图像处理的装置的结构框图。FIG. 3 is a structural block diagram of an apparatus for image processing according to Embodiment 3 of the present invention.
下面结合附图和实施例对本发明作进一步的详细说明。可以理解的是,此处所描述的具体实施例仅仅用于解释本发明,而非对本发明的限定。另外还需要说明的是,为了便于描述,附图中仅示出了与本发明相关的部分而非全部结构。The present invention will be further described in detail below with reference to the accompanying drawings and embodiments. It is understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. It should also be noted that, for ease of description, only some, but not all, of the structures related to the present invention are shown in the drawings.
实施例一Embodiment 1
图1为本发明实施例一提供的一种图像处理的方法的流程示意图,该方法适用于视频通话时对所捕获的图像帧进行图像处理的情况,该方法可以由图像处理的装置执行,其中该装置可由软件和/或硬件实现,并一般集成在具有视频通话功能的智能终端上。FIG. 1 is a schematic flowchart of a method for image processing according to a first embodiment of the present invention. The method is applicable to image processing of a captured image frame during a video call, and the method may be performed by an image processing device, where The device can be implemented by software and/or hardware and is generally integrated on a smart terminal having a video call function.
在本实施例中,所述智能终端具体可以是手机、平板电脑、笔记本等智能移动终端,也可以是台式计算机、智能会议终端等固定式的具有视频通话功能的电子设备。In this embodiment, the smart terminal may be a smart mobile terminal such as a mobile phone, a tablet computer, or a notebook, or a fixed electronic device with a video call function such as a desktop computer or a smart conference terminal.
本实施例优选的设定其应用场景为视频通话,对于固定式的智能终端而言,如果其固定放置后的摄像头与室内的窗户相对应,且室外环境的光强度大于室内环境的光强度,则摄像头所捕获当前实景图像帧中的视频参与者将处于背光
状态,有可能无法在当前实景图像帧中清晰显示。由此可根据本实施例提供的图像处理方法确定室内窗户所在的具体图像区域,从而对室内窗户所在图像区域的图像亮度以及图像锐度等图像参数进行调节处理。In this embodiment, the application scenario is preferably a video call. For a fixed smart terminal, if the fixed camera is corresponding to the window in the room, and the light intensity of the outdoor environment is greater than the light intensity of the indoor environment, Then the video participant in the current live image frame captured by the camera will be in backlight
The status may not be clearly displayed in the current live image frame. Therefore, according to the image processing method provided in this embodiment, the specific image area where the indoor window is located is determined, thereby adjusting the image parameters such as image brightness and image sharpness of the image area where the indoor window is located.
如图1所示,本发明实施例一提供的一种图像处理的方法,包括如下操作:As shown in FIG. 1, a method for image processing according to Embodiment 1 of the present invention includes the following operations:
S101、获取通过摄像头捕获的当前实景图像帧,并确定当前实景图像帧中的目标聚焦图像。S101. Acquire a current live image frame captured by the camera, and determine a target focused image in the current live image frame.
在本实施例中,进行视频通话时可以通过摄像头实时对捕获空间的图像进行捕获,从而形成当前实景图像帧。此外,在对捕获空间中的图像进行捕获时,会选择一个被摄体作为目标聚焦图像,本实施例在进行图像捕获时,可以将捕获空间中动态被摄体作为目标聚焦图像,此时需要在当前实景图像帧中确定动态被摄体所对应的图像区域,也可以将预先设定的像素信息所对应的图像作为目标聚焦图像,此时需要对上述预先设定的像素信息在在当前实景图像帧中对应的图像区域进行确定,以作为目标聚焦图像。In this embodiment, when a video call is made, the image of the capture space can be captured by the camera in real time, thereby forming a current live image frame. In addition, when capturing an image in the capture space, one subject is selected as the target focused image. In the embodiment, when the image capture is performed, the dynamic subject in the capture space can be used as the target focused image. The image area corresponding to the dynamic subject is determined in the current real image frame, and the image corresponding to the preset pixel information may be used as the target focused image. In this case, the preset pixel information needs to be in the current real scene. The corresponding image area in the image frame is determined to be the target focused image.
S102、根据目标聚焦图像确定所述当前实景图像帧的景深远界限值。S102. Determine a depth of field limit value of the current live image frame according to the target focused image.
在本实施例中,根据上述步骤确定的目标聚焦图像,可以确定该目标聚焦图像到摄像头前节点的实际距离,该实际距离相当于摄像头此时的聚焦距离,在本实施例中,所述聚焦距离可以根据图像聚焦图像的当前像素信息及对应的景深信息来确定,此外,根据所述聚焦距离以及摄像头的属性参数就可以确定摄像头所捕获图像帧的景深范围。In this embodiment, according to the target focused image determined by the above steps, the actual distance of the target focused image to the front node of the camera may be determined, and the actual distance is equivalent to the focus distance of the camera at this time. In this embodiment, the focus is The distance may be determined according to current pixel information of the image focused image and corresponding depth of field information, and further, the depth of field range of the image frame captured by the camera may be determined according to the focus distance and the attribute parameter of the camera.
一般地,该景深范围由景深近界限值和景深远界限值形成,所述景深近界限值能够显示在当前实景图像帧中的图像与摄像头之间的最近距离;所述景深远界限值具体可看作能够显示在当前实景图像帧中的图像与摄像头之间的最远
距离,因此,确定根据其确定的景深范围,就可确定所述当前实景图像帧的景深远界限值。Generally, the depth of field range is formed by a depth of field near threshold value and a depth of field limit value, and the depth of field near threshold value can display the closest distance between the image in the current live image frame and the camera; the depth of field limit value may specifically Think of the farthest between the image and the camera that can be displayed in the current live image frame
The distance, therefore, determines the depth of field limit value of the current live image frame based on the determined depth of field range.
S103、对景深远界限值对应的图像区域进行图像参数信息的调节处理。S103: Perform image coordinate information adjustment processing on the image region corresponding to the depth of field limit value.
本步骤中,当前实景图像帧可理解为一个具有景深信息的图像帧,在确定景深远界限值后,可以在当前实景图像帧中确定所述景深远界限值对应的图像区域,进而对确定的图像区域根据其图像参数信息进行调解处理。In this step, the current real image frame can be understood as an image frame having depth of field information. After determining the depth of field limit value, the image region corresponding to the far depth limit value of the depth of field can be determined in the current real image frame, and then determined. The image area is subjected to mediation processing based on its image parameter information.
示例性地,对于固定式的智能终端而言,当其固定放置后的摄像头与室内的窗户相对且进行视频通话时,为减少所捕获的当前实景画面帧中室内窗户的光强度对视频参与者显示画面的影响,可以通过本步骤将景深远界限值对应图像区域看作室内窗户的所在区域,由此可对所确定的图像区域进行局部的调节处理,进而达到清晰显示视频参与者的目的。Illustratively, for a stationary smart terminal, when the fixedly placed camera is opposite to the window in the room and a video call is made, the video participant is reduced in order to reduce the light intensity of the indoor window in the captured current live picture frame. The effect of the display screen can be regarded as the area where the indoor window is located by using the image area corresponding to the far depth limit value, thereby locally adjusting the determined image area, thereby achieving the purpose of clearly displaying the video participants.
本发明实施例一提供的一种图像处理的方法,该方法首先获取通过摄像头捕获的当前实景图像帧,并确定当前实景图像帧中的目标聚焦图像;然后根据目标聚焦图像确定当前实景图像帧的景深远界限值;最终对景深远界限值对应的图像区域进行图像参数信息的调节处理。利用该方法,能够对视频通话时所捕获图像帧中的局部图像进行调节处理,由此达到清晰显示视频参与者的目的,更好地增加了图像处理的灵活性。A method for image processing according to Embodiment 1 of the present invention first acquires a current live image frame captured by a camera, and determines a target focused image in a current live image frame; and then determines a current live image frame according to the target focused image. The depth of field is far from the limit value; finally, the image parameter information is adjusted to the image area corresponding to the depth of field limit value. By using the method, the partial image in the image frame captured during the video call can be adjusted, thereby achieving the purpose of clearly displaying the video participant, and the flexibility of image processing is better.
实施例二Embodiment 2
图2为本发明实施例二提供的一种图像处理的方法的流程示意图。本发明实施例以上述实施例为基础进行优化,在本实施例中,进一步将获取通过摄像头捕获的当前实景图像帧具体优化为:获取通过至少两个摄像头分别捕获的当
前图像帧;对分别捕获的至少两张当前图像帧进行图像合成处理,获得当前实景图像帧;其中,所述当前实景图像帧中各像素点具有相应的景深信息。FIG. 2 is a schematic flowchart diagram of a method for image processing according to Embodiment 2 of the present invention. The embodiment of the present invention is optimized based on the foregoing embodiment. In this embodiment, the current real-life image frame captured by the camera is further optimized to be: captured by at least two cameras respectively.
a pre-image frame; performing image synthesizing processing on the at least two current image frames respectively captured to obtain a current real-image frame; wherein each pixel in the current live image frame has corresponding depth-of-field information.
在上述优化的基础上,还将确定所述当前实景图像帧中的目标聚焦图像具体化为:根据人物图像特征确定所述当前实景图像帧中的被摄人物,并确定组成所述被摄人物的当前像素信息;在已获取的前一实景图像帧中确定是否存在所述被摄人物;如果存在所述被摄人物,则在所述前一实景图像帧中确定组成所述被摄人物的历史像素信息,并判定所述当前像素信息是否与所述历史像素信息匹配,若否,则确定所述被摄人物的位置发生变化,将所述被摄人物确定为目标聚焦图像;若是,则根据各被摄人物的当前像素信息确定平均像素信息,将所述平均像素信息对应的区域确定为目标聚焦图像;如果不存在所述被摄人物,则获取预设的聚焦像素信息,将所述聚焦像素信息在所述当前实景图像帧中对应的区域确定为目标聚焦图像。On the basis of the above optimization, the target focused image in the current real image frame is also determined to be: determining a subject in the current live image frame according to the character image feature, and determining to form the taken person Current pixel information; determining whether the subject is present in the acquired previous live image frame; if the subject is present, determining the composition of the subject in the previous live image frame Historical pixel information, and determining whether the current pixel information matches the historical pixel information, and if not, determining that the position of the subject is changed, determining the subject to be a target focused image; if yes, Determining average pixel information according to current pixel information of each of the captured persons, determining an area corresponding to the average pixel information as a target focused image; if the captured person does not exist, acquiring preset focused pixel information, The corresponding area of the focused pixel information in the current live image frame is determined as the target focused image.
进一步地,所述根据所述目标聚焦图像确定所述当前实景图像帧的景深远界限值,具体可优化为:根据所述目标聚焦图像在所述当前实景图像帧中的当前像素信息,确定所述目标聚焦图像的平面坐标信息;根据所述当前像素信息对应的景深信息,确定所述目标聚焦图像的深度值;根据所述平面坐标信息及所述深度值,确定所述目标聚焦图像到摄像头的实际聚焦距离;根据所述实际聚焦距离及获取的摄像头属性参数,确定所述当前实景图像帧的景深远界限值。Further, determining the depth of field limit value of the current live image frame according to the target focused image may be optimized to: determine, according to current pixel information of the target focused image in the current live image frame, a plane coordinate information of the target focused image; determining a depth value of the target focused image according to the depth information corresponding to the current pixel information; determining the target focused image to the camera according to the plane coordinate information and the depth value The actual focus distance; determining the depth of field limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameter.
此外,本实施还将对所述景深远界限值对应的图像区域进行图像参数信息的调节处理,具体优化为:获取所述景深远界限值对应的图像区域的图像参数信息,所述图像参数信息包括:图像RGB占比、色彩对比度以及图像锐度;在所述图像参数信息不符合设定的标准参数信息时,控制调节所述图像区域的图
像亮度、色彩对比度和/或图像锐度,以使所述图像参数信息符合所述标准参数信息。In addition, the present embodiment further performs an adjustment process of the image parameter information on the image region corresponding to the depth of field limit value, and is specifically optimized to: acquire image parameter information of the image region corresponding to the depth of field limit value, the image parameter information. The method includes: an image RGB ratio, a color contrast, and an image sharpness; and when the image parameter information does not meet the set standard parameter information, controlling to adjust the image region
Image brightness, color contrast, and/or image sharpness are such that the image parameter information conforms to the standard parameter information.
如图2所示,本发明实施例二提供一种图像处理的方法,具体包括如下操作:As shown in FIG. 2, a second embodiment of the present invention provides a method for image processing, which specifically includes the following operations:
S201、获取通过至少两个摄像头分别捕获的当前图像帧。S201. Acquire a current image frame respectively captured by at least two cameras.
一般地,为获取所捕获图像帧的景深信息,需要捕获具有立体空间感的图像帧,由此可以采用光轴平行设置的至少两个摄像头分别实时的从不同角度进行图像捕获。Generally, in order to acquire the depth of field information of the captured image frame, it is necessary to capture an image frame having a stereoscopic space, whereby at least two cameras arranged in parallel with the optical axis can perform image capturing from different angles in real time.
在本实施例中,采用多个摄像头进行图像捕获时,所采用的至少两个摄像头在智能终端上的设置位置存在不同,对于同一被摄体而言,该被摄体在不同摄像头所捕获图像帧中的像素位置存在不同,进而可以根据不同的像素位置信息确定被摄体的景深信息。In the embodiment, when the image capturing is performed by using a plurality of cameras, the setting positions of the at least two cameras used on the smart terminal are different, and for the same subject, the image captured by the object in different cameras is different. The pixel positions in the frame are different, and the depth of field information of the object can be determined according to different pixel position information.
S202、对分别捕获的至少两张当前图像帧进行图像合成处理,获得当前实景图像帧。S202. Perform image synthesis processing on at least two current image frames respectively captured to obtain a current real image frame.
本步骤可以对不同摄像头捕获的当前图像帧进行合成,从而得到具有立体空间感的当前实景图像帧。可以理解的是,所合成的当前实景图像帧中各像素点均具有相应的景深信息,具体地,各像素点景深信息的确定过程可描述为:对不同摄像头所捕获的当前图像帧进行立体匹配,从而获得同一对应点在不同当前图像帧中的视差值,之后可根据视差值与深度的关系,确定不同像素点的景深信息。In this step, the current image frames captured by different cameras can be combined to obtain a current live image frame having a stereoscopic sense. It can be understood that each pixel in the synthesized current real image frame has corresponding depth of field information. Specifically, the process of determining the depth information of each pixel may be described as: stereo matching the current image frame captured by different cameras. Therefore, the disparity values of the same corresponding points in different current image frames are obtained, and then the depth information of different pixel points can be determined according to the relationship between the disparity values and the depths.
在本实施例中,可以对当前实景图像帧中各像素点的景深信息进行存储,以用于后续待处理图像区域的选择。
In this embodiment, the depth information of each pixel in the current live image frame may be stored for selection of a subsequent image area to be processed.
S203、根据人物图像特征确定当前实景图像帧中的被摄人物,并确定组成该被摄人物的当前像素信息。S203. Determine a subject in the current live image frame according to the character image feature, and determine current pixel information that constitutes the subject.
在本实施例中,步骤S203至步骤S209具体给出了目标聚焦图像的确定过程。本步骤具体通过预设的人物图像特征来识别确定当前实景图像帧中包含的被摄人物。一般地,视频通话时,摄像头捕获的当前实景图像帧中往往存在一个或多个被摄人物,由此可以根据人物图像特征识别出当前实景图像帧中具体包含的被摄人物数,且在识别存在被摄人物后,还可以确定每个被摄人物在当前实景图像帧中的当前像素信息,所述当前像素信息具体可理解为组成一个被摄人物的所有像素点的像素值范围。In the present embodiment, steps S203 to S209 specifically specify the determination process of the target focused image. This step specifically identifies, by the preset character image feature, the determined person included in the current live image frame. Generally, when a video call is made, one or more subjects are often present in the current live image frame captured by the camera, so that the number of the selected persons actually included in the current live image frame can be identified according to the image characteristics of the person, and the recognition is performed. After the subject is present, the current pixel information of each subject in the current live image frame can also be determined, and the current pixel information can be specifically understood as a range of pixel values of all the pixels constituting one subject.
S204、在已获取的前一实景图像帧中确定是否存在该被摄人物,若是,执行步骤S205;若否,执行步骤S209。S204. Determine whether the subject is present in the acquired previous live image frame. If yes, go to step S205; if no, go to step S209.
本步骤可用来判定当前实景图像帧中的被摄人物是否也出现在前一实景图像帧中,一般地,不同的被摄人物自身具有区别于其他被摄人物的特征(如被摄人物的衣服颜色以及佩戴饰物等),因此可以根据当前实景图像帧中被摄人物自身具有的特征来确定该被摄人物是否存在于前一实景图像帧中,当前一实景图像帧中不存在所判定的被摄人物时,可以进行步骤S209的操作;如果存在所判定的被摄人物时,可以执行步骤S205的操作。This step can be used to determine whether the subject in the current live image frame also appears in the previous live image frame. Generally, different captured characters themselves have characteristics that are different from other captured characters (such as the clothes of the captured person). Color and wearing ornaments, etc., so it is possible to determine whether the subject is present in the previous real image frame according to the characteristics of the subject in the current live image frame, and the determined scene is not present in the current live image frame. When the person is photographed, the operation of step S209 can be performed; if there is the determined subject person, the operation of step S205 can be performed.
S205. Determine, in the previous live image frame, the historical pixel information that makes up the photographed person.
After it has been determined that the photographed person in question is present in the previous live image frame, this step determines the pixel positions of that person in the previous live image frame; these pixel positions are recorded as the historical pixel information of the photographed person.
S206. Determine whether the current pixel information matches the historical pixel information; if not, perform step S207; if so, perform step S208.
It should be noted that when the position of the smart terminal used for the video call is fixed, the capture space corresponding to its camera does not change, and this step matches the determined historical pixel information of the photographed person against the current pixel information.
In this embodiment, if the photographed person is moving, the historical pixel information in the previous live image frame cannot fully match the current pixel information in the current live image frame, and step S207 is performed; if the photographed person is stationary, the historical pixel information may match the current pixel information, and step S208 is performed.
S207. Determine that the position of the photographed person has changed, determine the photographed person as the target focused image, and then perform step S210.
In this embodiment, when the historical pixel information of the photographed person does not match the current pixel information, it can be determined that the photographed person has moved; the photographed person is then determined as the target focused image, and step S210 is performed after the target focused image has been determined.
It should be noted that if several photographed persons in the current live image frame have changed position, the photographed person whose historical pixel information has the lowest degree of matching with the current pixel information may be selected as the target focused image. For example, the degree of matching between the historical pixel information and the current pixel information may be determined from the number of matched pixels: the smaller the number of matched pixels, the lower the degree of matching.
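A minimal sketch of the counting rule above follows; it assumes the pixel information is available as collections of (x, y) coordinates belonging to one photographed person, and the names are illustrative.

```python
def match_degree(current_pixels, historical_pixels):
    """Fraction of a person's current pixels that also appear in the previous frame.

    A lower value means a lower degree of matching, i.e. a larger change in
    position; the person with the lowest score would be chosen as the target
    focused image when several persons have moved.
    """
    current = set(current_pixels)
    if not current:
        return 0.0
    matched = len(current & set(historical_pixels))
    return matched / len(current)
```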
S208. Determine average pixel information from the current pixel information of each photographed person, determine the region corresponding to the average pixel information as the target focused image, and then perform step S210.
In this embodiment, if the current pixel information of every photographed person in the current live image frame matches the corresponding historical pixel information, it can be determined that the photographed persons are stationary. This step then determines the average pixel information of all photographed persons in the current live image frame from their current pixel information, the region corresponding to the average pixel information is determined as the target focused image, and step S210 is performed after the target focused image has been determined.
S209. Acquire preset focus pixel information, determine the region corresponding to the focus pixel information in the current live image frame as the target focused image, and then perform step S210.
This step handles the case in which no photographed person is present in the previous live image frame. This typically happens either because the current live image frame is the first frame captured, so there is no previous live image frame, or because the previous live image frame genuinely contains no photographed person.
In this embodiment, when one of the above cases applies, the preset focus pixel information can be acquired, the region corresponding to the focus pixel information is then determined in the current live image frame, that region is taken directly as the target focused image, and step S210 is performed after the target focused image has been determined.
It should be noted that the capture range of the camera mounted on the smart terminal is generally fixed, so in this embodiment the focus pixel information can be set according to the pixel information of the focused images determined when earlier image frames were captured.
S210. Determine the plane coordinate information of the target focused image according to the current pixel information of the target focused image in the current live image frame.
In this embodiment, the current live image frame is synthesized from the current image frames captured by at least two cameras, and it contains the spatial information of each image element (the plane coordinate information displayed on the screen and the depth value that conveys the stereoscopic effect).
The plane coordinate information can be determined from the current pixel information of the target focused image. Specifically, an average pixel coordinate value can be computed from the pixel coordinates of the pixels in the current pixel information, and this average pixel coordinate value is regarded as the plane coordinate information of the target focused image.
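A minimal sketch of this averaging step, assuming the current pixel information is available as an array of (x, y) coordinates (the helper name is illustrative):

```python
import numpy as np

def plane_coordinate(pixel_coords):
    """Average (x, y) of all pixels belonging to the target focused image."""
    coords = np.asarray(pixel_coords, dtype=np.float64)
    return coords.mean(axis=0)  # plane coordinate information (x_mean, y_mean)
```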
S211. Determine the depth value of the target focused image according to the depth information corresponding to the current pixel information.
For example, the depth information corresponding to the average pixel coordinate value can be looked up in the pre-stored table that maps pixels to depth information, and this depth information is used as the depth value of the target focused image.
S212. Determine the actual focus distance from the target focused image to the camera according to the plane coordinate information and the depth value.
In this embodiment, the projection point of the target focused image in three-dimensional space can be determined from the plane coordinate information and the depth value. Specifically, taking the top-left pixel of the smart terminal's screen as the origin, once the projection point of the target focused image in three-dimensional space has been determined, the actual distance from that projection point to the pixel origin can be computed from the plane coordinate information and the depth value, and this computed distance is taken as the actual focus distance from the target focused image to the camera.
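Under the convention just described, the distance from the origin to the projection point is a Euclidean distance. The sketch below assumes the plane coordinate has already been converted to the same physical unit as the depth value (e.g. meters), which the embodiment does not spell out; the function name is illustrative.

```python
import math

def actual_focus_distance(x_m, y_m, depth_m):
    """Distance from the origin (top-left pixel) to the projected point."""
    return math.sqrt(x_m ** 2 + y_m ** 2 + depth_m ** 2)
```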
S213. Determine the far depth-of-field limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameters.
In this embodiment, the camera attribute parameters may include the hyperfocal distance and the lens focal length, both of which are determined by the type of camera used. Specifically, the near and far depth-of-field limit values of the current live image frame can be determined from the actual focus distance, the acquired camera attribute parameters, the formula for the near depth-of-field limit, S_near = H·D / (H + D − F), and the formula for the far depth-of-field limit, S_far = H·D / (H − D − F), where S_near denotes the near depth-of-field limit value, S_far denotes the far depth-of-field limit value, H denotes the hyperfocal distance of the camera, D denotes the actual focus distance, and F denotes the lens focal length of the camera.
For example, when the camera attribute parameters are a hyperfocal distance of 6.25 meters at f/8 (with a circle of confusion of 0.05 mm) and a lens focal length of 50 mm, and the actual focus distance is 4 meters, the above formulas give a near depth-of-field limit value of 2.45 meters and a far depth-of-field limit value of 11.36 meters.
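The worked example can be checked numerically; the helper below is only a sketch, with the formulas written in the simplified form S_near = H·D/(H + D − F), S_far = H·D/(H − D − F) that reproduces the quoted 2.45 m and 11.36 m figures.

```python
def depth_of_field_limits(hyperfocal_m, focus_distance_m, focal_length_m):
    """Near and far depth-of-field limit values for the current frame."""
    h, d, f = hyperfocal_m, focus_distance_m, focal_length_m
    near = h * d / (h + d - f)
    far = h * d / (h - d - f)   # only meaningful while D + F < H
    return near, far

# f/8, 50 mm lens (hyperfocal 6.25 m for a 0.05 mm circle of confusion), focus at 4 m
near, far = depth_of_field_limits(6.25, 4.0, 0.05)
print(round(near, 2), round(far, 2))  # -> 2.45 11.36
```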
S214. Acquire the image parameter information of the image region corresponding to the far depth-of-field limit value, where the image parameter information includes the image RGB proportion, the color contrast, and the image sharpness.
In this embodiment, the far depth-of-field limit value corresponds to the farthest distance at which the camera can capture an image, i.e. to the most distant image region in the current live image frame. That image region can be located from the far depth-of-field limit value, and its image parameter information, such as the image RGB proportion, color contrast, and image sharpness, can then be acquired.
In this embodiment, the image RGB proportion can be used to determine the brightness of the image region. The color contrast can be a measurement of the different brightness levels between the brightest white and the darkest black in the light and dark areas of the image region: the larger the range of difference, the greater the color contrast, and the smaller the range, the lower the contrast. The image sharpness can be understood as an index of the clarity of the image plane and the sharpness of image edges; the higher the image sharpness, the higher the contrast of details on the image plane and the clearer the image appears.
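These three parameters can be approximated with common measures. The sketch below treats the mean RGB level as a brightness proxy, the spread between the darkest and brightest luminance as contrast, and the variance of the Laplacian as sharpness; these particular formulas are assumptions for illustration, not definitions taken from the embodiment.

```python
import cv2

def image_parameter_info(region_bgr):
    """Rough brightness, contrast and sharpness measures for an image region."""
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    brightness = float(region_bgr.mean())                     # RGB-proportion / brightness proxy
    contrast = float(gray.max()) - float(gray.min())          # darkest-to-brightest spread
    sharpness = float(cv2.Laplacian(gray, cv2.CV_64F).var())  # edge/detail strength
    return {"brightness": brightness, "contrast": contrast, "sharpness": sharpness}
```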
S215. When the image parameter information does not conform to the set standard parameter information, adjust the image brightness, color contrast and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
In this embodiment, the image parameter information can be compared with the set standard parameter information, the image brightness, color contrast and/or image sharpness are adjusted according to the comparison result, and the image parameter information is finally brought into conformity with the standard parameter information.
It will be understood that if the image region corresponding to the far depth-of-field limit value is, for example, a bright window, adjusting the image parameter information appropriately lowers the display brightness of the window image, so that the image information of the video participants can be displayed clearly in the current live image frame.
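Bringing the region toward the standard parameter information could be as simple as a linear brightness/contrast correction, as in the sketch below; the target values and the function name are hypothetical and only illustrate the kind of adjustment described, such as toning down an over-bright window.

```python
import cv2

def adjust_region(region_bgr, target_brightness=110.0, contrast_gain=1.1):
    """Linearly adjust a region so an over-bright area is toned down."""
    current_brightness = float(region_bgr.mean())
    beta = target_brightness - current_brightness          # brightness offset
    return cv2.convertScaleAbs(region_bgr, alpha=contrast_gain, beta=beta)
```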
The image processing method provided in Embodiment 2 of the present invention details the acquisition of image frames, the determination of the target focused image, the determination of the far depth-of-field limit value, and the adjustment of the image region corresponding to that limit value. The method can acquire image frames synthesized from dual-camera captures and can determine the far depth-of-field limit value of an image frame from the depth information of the synthesized frame and the determined target focused image, so that image adjustment can be applied to the region corresponding to the far depth-of-field limit value. With this method, the target region to be processed is determined and processed efficiently, processing of the entire image frame is avoided, the flexibility of image processing is increased, the image processing efficiency during a video call is improved, and the display of video participants on the smart terminal is thereby enhanced.
On the basis of the above embodiment, this embodiment further adds, after the photographed persons in the current live image frame are determined according to the person image features: performing brightness enhancement on the photographed persons.
It should be noted that, based on the image processing described above, the image region corresponding to the far depth-of-field limit value can be adjusted so that the current live image frame shows a clear image of the video participants. In addition, since the photographed persons recognized in the current live image frame can be regarded as the video participants, brightness enhancement can be applied to the recognized photographed persons directly while the selected image region is being processed.
Specifically, a region to be processed can also be determined from the current pixel information of a photographed person and the corresponding depth information; the image parameter information of that region is then determined and adjusted so that the brightness of the photographed person is increased and the person is displayed better in the current live image frame.
Embodiment 3
FIG. 3 is a structural block diagram of an image processing apparatus according to Embodiment 3 of the present invention. The apparatus is suitable for performing image processing on image frames captured during a video call; it may be implemented in software and/or hardware and is generally integrated in a smart terminal with a video call function. As shown in FIG. 3, the apparatus includes: a live image acquisition module 31, a focused image determination module 32, a depth-of-field limit determination module 33, and an image parameter adjustment module 34.
Of these, the live image acquisition module 31 is configured to acquire a current live image frame captured by a camera;
the focused image determination module 32 is configured to determine a target focused image in the current live image frame;
the depth-of-field limit determination module 33 is configured to determine a far depth-of-field limit value of the current live image frame according to the target focused image;
and the image parameter adjustment module 34 is configured to adjust image parameter information of the image region corresponding to the far depth-of-field limit value.
In this embodiment, the apparatus first acquires the current live image frame captured by the camera through the live image acquisition module 31; the focused image determination module 32 then determines the target focused image in the current live image frame; the depth-of-field limit determination module 33 then determines the far depth-of-field limit value of the current live image frame according to the target focused image; finally, the image parameter adjustment module 34 adjusts the image parameter information of the image region corresponding to the far depth-of-field limit value.
The image processing apparatus provided in Embodiment 3 of the present invention can adjust local regions of the image frames captured during a video call, efficiently determines and processes the target region to be processed, increases the flexibility of image processing, and effectively improves the display of video participants on the smart terminal.
Further, the live image acquisition module 31 is specifically configured to: acquire current image frames captured separately by at least two cameras; and perform image synthesis on the at least two captured current image frames to obtain a current live image frame, where each pixel in the current live image frame has corresponding depth information.
On the basis of the above refinement, the focused image determination module 32 includes:
a photographed person determination unit, configured to determine the photographed persons in the current live image frame according to person image features and determine the current pixel information that makes up each photographed person; an information judgment unit, configured to determine whether a photographed person is present in the previously acquired live image frame; a first execution unit, configured to, when the photographed person is present, determine the historical pixel information that makes up the photographed person in the previous live image frame and judge whether the current pixel information matches the historical pixel information, and if not, determine that the position of the photographed person has changed and determine the photographed person as the target focused image, or if so, determine average pixel information from the current pixel information of each photographed person and determine the region corresponding to the average pixel information as the target focused image; and a second execution unit, configured to, when the photographed person is not present, acquire preset focus pixel information and determine the region corresponding to the focus pixel information in the current live image frame as the target focused image.
Further, the focused image determination module 32 also includes: a photographed person processing unit, configured to perform brightness enhancement on the photographed persons after the photographed persons in the current live image frame are determined according to the person image features.
On the basis of the above embodiment, the depth-of-field limit determination module 33 is specifically configured to: determine the plane coordinate information of the target focused image according to the current pixel information of the target focused image in the current live image frame; determine the depth value of the target focused image according to the depth information corresponding to the current pixel information; determine the actual focus distance from the target focused image to the camera according to the plane coordinate information and the depth value; and determine the far depth-of-field limit value of the current live image frame according to the actual focus distance and the acquired camera attribute parameters.
Further, the image parameter adjustment module 34 is specifically configured to: acquire the image parameter information of the image region corresponding to the far depth-of-field limit value, the image parameter information including the image RGB proportion, the color contrast, and the image sharpness; and, when the image parameter information does not conform to the set standard parameter information, adjust the image brightness, color contrast and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
Embodiment 4
Embodiment 4 of the present invention further provides an intelligent conference terminal, including at least two cameras with parallel optical axes and the image processing apparatus provided in the above embodiments of the present invention. Image processing can be performed by the image processing methods provided in Embodiments 1 and 2 above.
In this embodiment, the intelligent conference terminal is a type of electronic device with a video call function; it integrates a video call system and is also equipped with at least two cameras with parallel optical axes and the image processing apparatus provided in the above embodiments of the present invention.
With the image processing apparatus provided in the above embodiments of the present invention integrated in the intelligent conference terminal, when a video call is made with other smart terminals having a video call function, the image parameter information of local regions of the current live image frames captured in real time can be adjusted, which effectively improves the display of video participants on the intelligent conference terminal and further improves its user experience.
A person of ordinary skill in the art will understand that all or part of the steps of the methods in the above embodiments can be completed by a program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, includes the following steps: acquiring a current live image frame captured by a camera and determining a target focused image in the current live image frame; determining a far depth-of-field limit value of the current live image frame according to the target focused image; and adjusting image parameter information of the image region corresponding to the far depth-of-field limit value. The storage medium may be, for example, a ROM/RAM, a magnetic disk, or an optical disc.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and various obvious changes, readjustments and substitutions can be made without departing from the scope of protection of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include other equivalent embodiments without departing from the inventive concept; its scope is determined by the scope of the appended claims.
Claims (10)
- A method of image processing, comprising: acquiring a current live image frame captured by a camera, and determining a target focused image in the current live image frame; determining a far depth-of-field limit value of the current live image frame according to the target focused image; and adjusting image parameter information of an image region corresponding to the far depth-of-field limit value.
- The method according to claim 1, wherein acquiring the current live image frame captured by the camera comprises: acquiring current image frames captured separately by at least two cameras; and performing image synthesis on the at least two captured current image frames to obtain the current live image frame, wherein each pixel in the current live image frame has corresponding depth information.
- The method according to claim 2, wherein determining the target focused image in the current live image frame comprises: determining photographed persons in the current live image frame according to person image features, and determining current pixel information that makes up each photographed person; determining whether the photographed person is present in a previously acquired live image frame; if the photographed person is present, determining, in the previous live image frame, historical pixel information that makes up the photographed person, and judging whether the current pixel information matches the historical pixel information, and if not, determining that the position of the photographed person has changed and determining the photographed person as the target focused image, or if so, determining average pixel information according to the current pixel information of each photographed person and determining a region corresponding to the average pixel information as the target focused image; and if the photographed person is not present, acquiring preset focus pixel information and determining a region corresponding to the focus pixel information in the current live image frame as the target focused image.
- The method according to claim 3, further comprising, after determining the photographed persons in the current live image frame according to the person image features: performing brightness enhancement on the photographed persons.
- The method according to claim 2, wherein determining the far depth-of-field limit value of the current live image frame according to the target focused image comprises: determining plane coordinate information of the target focused image according to current pixel information of the target focused image in the current live image frame; determining a depth value of the target focused image according to depth information corresponding to the current pixel information; determining an actual focus distance from the target focused image to the camera according to the plane coordinate information and the depth value; and determining the far depth-of-field limit value of the current live image frame according to the actual focus distance and acquired camera attribute parameters.
- The method according to claim 1, wherein adjusting the image parameter information of the image region corresponding to the far depth-of-field limit value comprises: acquiring image parameter information of the image region corresponding to the far depth-of-field limit value, the image parameter information including an image RGB proportion, a color contrast, and an image sharpness; and when the image parameter information does not conform to set standard parameter information, adjusting the image brightness, color contrast and/or image sharpness of the image region so that the image parameter information conforms to the standard parameter information.
- An apparatus for image processing, comprising: a live image acquisition module, configured to acquire a current live image frame captured by a camera; a focused image determination module, configured to determine a target focused image in the current live image frame; a depth-of-field limit determination module, configured to determine a far depth-of-field limit value of the current live image frame according to the target focused image; and an image parameter adjustment module, configured to adjust image parameter information of an image region corresponding to the far depth-of-field limit value.
- The apparatus according to claim 7, wherein the live image acquisition module is specifically configured to: acquire current image frames captured separately by at least two cameras; and perform image synthesis on the at least two captured current image frames to obtain the current live image frame, wherein each pixel in the current live image frame has corresponding depth information.
- The apparatus according to claim 8, wherein the focused image determination module comprises: a photographed person determination unit, configured to determine photographed persons in the current live image frame according to person image features and determine current pixel information that makes up each photographed person; an information judgment unit, configured to determine whether the photographed person is present in a previously acquired live image frame; a first execution unit, configured to, when the photographed person is present, determine historical pixel information that makes up the photographed person in the previous live image frame and judge whether the current pixel information matches the historical pixel information, and if not, determine that the position of the photographed person has changed and determine the photographed person as the target focused image, or if so, determine average pixel information according to the current pixel information of each photographed person and determine a region corresponding to the average pixel information as the target focused image; and a second execution unit, configured to, when the photographed person is not present, acquire preset focus pixel information and determine a region corresponding to the focus pixel information in the current live image frame as the target focused image.
- An intelligent conference terminal, comprising at least two cameras with parallel optical axes, and further comprising the apparatus for image processing according to any one of claims 7 to 9.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710160930.7 | 2017-03-17 | ||
CN201710160930.7A CN106803920B (en) | 2017-03-17 | 2017-03-17 | Image processing method and device and intelligent conference terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018166170A1 true WO2018166170A1 (en) | 2018-09-20 |
Family
ID=58988136
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/103282 WO2018166170A1 (en) | 2017-03-17 | 2017-09-25 | Image processing method and device, and intelligent conferencing terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106803920B (en) |
WO (1) | WO2018166170A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112351197A (en) * | 2020-09-25 | 2021-02-09 | 南京酷派软件技术有限公司 | Shooting parameter adjusting method and device, storage medium and electronic equipment |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803920B (en) * | 2017-03-17 | 2020-07-10 | 广州视源电子科技股份有限公司 | Image processing method and device and intelligent conference terminal |
CN111210471B (en) * | 2018-11-22 | 2023-08-25 | 浙江欣奕华智能科技有限公司 | Positioning method, device and system |
CN110545384B (en) * | 2019-09-23 | 2021-06-08 | Oppo广东移动通信有限公司 | Focusing method and device, electronic equipment and computer readable storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7657171B2 (en) * | 2006-06-29 | 2010-02-02 | Scenera Technologies, Llc | Method and system for providing background blurring when capturing an image using an image capture device |
JP2009290660A (en) * | 2008-05-30 | 2009-12-10 | Seiko Epson Corp | Image processing apparatus, image processing method, image processing program and printer |
CN104184935B (en) * | 2013-05-27 | 2017-09-12 | 鸿富锦精密工业(深圳)有限公司 | Image capture devices and method |
US9282285B2 (en) * | 2013-06-10 | 2016-03-08 | Citrix Systems, Inc. | Providing user video having a virtual curtain to an online conference |
CN103945118B (en) * | 2014-03-14 | 2017-06-20 | 华为技术有限公司 | Image weakening method, device and electronic equipment |
CN105100615B (en) * | 2015-07-24 | 2019-02-26 | 青岛海信移动通信技术股份有限公司 | A kind of method for previewing of image, device and terminal |
CN105303543A (en) * | 2015-10-23 | 2016-02-03 | 努比亚技术有限公司 | Image enhancement method and mobile terminal |
CN106331510B (en) * | 2016-10-31 | 2019-10-15 | 维沃移动通信有限公司 | A kind of backlight photographic method and mobile terminal |
- 2017
  - 2017-03-17 CN CN201710160930.7A patent/CN106803920B/en active Active
  - 2017-09-25 WO PCT/CN2017/103282 patent/WO2018166170A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060204034A1 (en) * | 2003-06-26 | 2006-09-14 | Eran Steinberg | Modification of viewing parameters for digital images using face detection information |
CN103324004A (en) * | 2012-03-19 | 2013-09-25 | 联想(北京)有限公司 | Focusing method and image capturing device |
CN104982029A (en) * | 2012-12-20 | 2015-10-14 | 微软技术许可有限责任公司 | CAmera With Privacy Modes |
CN105611167A (en) * | 2015-12-30 | 2016-05-25 | 联想(北京)有限公司 | Focusing plane adjusting method and electronic device |
CN106803920A (en) * | 2017-03-17 | 2017-06-06 | 广州视源电子科技股份有限公司 | Image processing method and device and intelligent conference terminal |
Also Published As
Publication number | Publication date |
---|---|
CN106803920A (en) | 2017-06-06 |
CN106803920B (en) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5222939B2 (en) | Simulate shallow depth of field to maximize privacy in videophones | |
US11431915B2 (en) | Image acquisition method, electronic device, and non-transitory computer readable storage medium | |
US9961273B2 (en) | Mobile terminal and shooting method thereof | |
US8749607B2 (en) | Face equalization in video conferencing | |
WO2018166170A1 (en) | Image processing method and device, and intelligent conferencing terminal | |
CN111327824B (en) | Shooting parameter selection method and device, storage medium and electronic equipment | |
US10003765B2 (en) | System and method for brightening video image regions to compensate for backlighting | |
WO2014034556A1 (en) | Image processing apparatus and image display apparatus | |
TW201432616A (en) | Image capturing device and image processing method thereof | |
CN106981078B (en) | Sight line correction method and device, intelligent conference terminal and storage medium | |
WO2016110188A1 (en) | Method and electronic device for aesthetic enhancements of face in real-time video | |
TW201801516A (en) | Image capturing apparatus and photo composition method thereof | |
CN106254784A (en) | A kind of method and device of Video processing | |
US11871123B2 (en) | High dynamic range image synthesis method and electronic device | |
US20240296531A1 (en) | System and methods for depth-aware video processing and depth perception enhancement | |
KR20110109574A (en) | Image processing method and photographing apparatus using the same | |
CN111182208B (en) | Photographing method and device, storage medium and electronic equipment | |
CN109618088B (en) | Intelligent shooting system and method with illumination identification and reproduction functions | |
CN114979689A (en) | Multi-machine position live broadcast directing method, equipment and medium | |
TW201340704A (en) | Image capture device and image synthesis method thereof | |
WO2018196854A1 (en) | Photographing method, photographing apparatus and mobile terminal | |
WO2016123850A1 (en) | Photographing control method for terminal, and terminal | |
JP2014102614A (en) | Image processing device, imaging device, display device, image processing method, and image processing program | |
WO2016202073A1 (en) | Image processing method and apparatus | |
JP2018182700A (en) | Image processing apparatus, control method of the same, program, and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17900818; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.01.2020) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17900818; Country of ref document: EP; Kind code of ref document: A1 |