CN114245006B - Processing method, device and system - Google Patents
Processing method, device and system
- Publication number: CN114245006B
- Application number: CN202111446247.2A
- Authority
- CN
- China
- Prior art keywords
- frame image
- image
- camera
- target
- cameras
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
- H04N23/60—Control of cameras or camera modules
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computing Systems (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
After a stitched image obtained through at least two cameras (a first frame image) is acquired, the method continues by adjusting the viewfinding range of the cameras to obtain a second frame image. The second frame image is then used to process the first frame image, so that abnormal areas in the first frame image, such as unclear regions, incomplete regions, and stitching gaps, can be compensated. The result is a target frame image that carries more data than the first frame image did before processing, which improves the image processing effect. Compared with directly outputting the stitched image as the target frame image, this approach better satisfies an application's requirements on image display quality.
Description
Technical Field
The present disclosure relates generally to the field of image processing technologies, and in particular, to a processing method, apparatus, and system.
Background
In the information age, images are an important means for humans to acquire, express, and communicate information. In practice, an appropriate image processing technique, such as image stitching, transformation, segmentation, enhancement, or restoration, can be selected to process the images acquired by a camera and obtain a target image that meets application requirements.
Taking image stitching as an example, multiple overlapping images collected from different viewing angles can be stitched into a panoramic or high-resolution image, so that a user can obtain richer information from the stitched result.
However, because each camera's horizontal viewing angle is limited, the stitched images often suffer from incomplete information, unclear regions, gaps, and similar defects, which degrade the image processing effect.
Disclosure of Invention
In view of this, the present application provides a processing method comprising:
obtaining a first frame image, wherein the first frame image is a stitched image obtained through at least two cameras;
obtaining a second frame image, wherein the second frame image is an image obtained by adjusting the viewfinding range of the cameras;
processing the first frame image by using the second frame image to obtain a target frame image;
wherein the target frame image carries a larger amount of data than the first frame image.
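For illustration only, the following minimal sketch walks through the three claimed steps on synthetic data; the capture helpers, array sizes, and pixel values are assumptions standing in for real stitched captures, not the patent's implementation.

```python
import numpy as np

def capture_first_frame() -> np.ndarray:
    frame = np.full((4, 6), 200, dtype=np.uint8)   # stitched image...
    frame[:, 3] = 0                                # ...with a blind/gap column
    return frame

def capture_second_frame() -> np.ndarray:
    # pretend the viewfinding range was adjusted before this capture
    return np.full((4, 6), 180, dtype=np.uint8)

first = capture_first_frame()
second = capture_second_frame()
target = first.copy()
target[first == 0] = second[first == 0]            # compensate abnormal pixels
assert np.count_nonzero(target) > np.count_nonzero(first)  # carries more data
```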
Optionally, obtaining the second frame image includes:
if the first frame image does not contain a first object, obtaining the second frame image, wherein the first object is located outside the viewfinding range of the at least two cameras; or
if a second object is detected entering a target area while the first frame image is being obtained, obtaining the second frame image, wherein the target area is an area outside the viewfinding range of the at least two cameras; or
if a stitching gap exists in the first frame image, obtaining the second frame image; or
if the display parameters of a first area in the first frame image do not meet the corresponding display conditions, obtaining the second frame image; or
upon acquiring an acquisition trigger event, obtaining the second frame image.
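The five triggers above can be folded into a single re-capture predicate; the sketch below does so, with hypothetical dictionary keys standing in for the outputs of real detectors.

```python
def needs_second_frame(info: dict) -> bool:
    return (
        not info.get("contains_first_object", True)      # expected object missed
        or info.get("object_entered_target_area", False) # second object in blind zone
        or info.get("has_stitching_gap", False)          # visible seam in the frame
        or not info.get("display_params_ok", True)       # e.g. a region too blurry
        or info.get("acquisition_trigger", False)        # scheduled trigger event
    )

print(needs_second_frame({"has_stitching_gap": True}))   # True -> adjust and recapture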
Optionally, obtaining the second frame image includes:
obtaining a first positional relationship between the first object and a first target camera determined from the at least two cameras, and adjusting the viewfinding range of the first target camera based at least on the first positional relationship to obtain a second frame image including the first object; or
obtaining a second positional relationship between the first object and the at least two cameras, and adjusting the viewfinding ranges of the at least two cameras based at least on the second positional relationship to obtain a second frame image including the first object; or
obtaining a third positional relationship between the target area and the cameras, and adjusting the viewfinding range of at least one of the at least two cameras based at least on the third positional relationship to obtain a second frame image including the second object; or
controlling the at least two cameras to rotate according to the same rotation parameters to obtain a second frame image including the area corresponding to the stitching gap; or
controlling at least one camera corresponding to the first area to rotate to obtain a second frame image in which the display parameters of the first area meet the corresponding display conditions.
Optionally, the method further comprises:
determining a time interval between the first frame image and the second frame image, and controlling a corresponding driving device to complete the adjustment of the camera's viewfinding range within that time interval.
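A minimal sketch of fitting the adjustment into the inter-frame interval follows; the Drive class, the 1 ms motor delay, and the 30 fps interval are illustrative assumptions.

```python
import time

class Drive:
    def rotate(self, angle_deg: float) -> None:
        time.sleep(0.001)            # pretend the motor needs ~1 ms

FRAME_INTERVAL_S = 1 / 30            # assumed inter-frame time

def adjust_within_interval(drive: Drive, angle_deg: float) -> bool:
    start = time.monotonic()
    drive.rotate(angle_deg)          # perform the adjustment
    return time.monotonic() - start <= FRAME_INTERVAL_S

print(adjust_within_interval(Drive(), 5.0))
```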
Optionally, the method further comprises:
after the second frame image is obtained, controlling a corresponding driving device to restore the rotated camera to its initial position, so as to continue obtaining corresponding target frame images and output a video stream formed from multiple target frame images; or
after the second frame image is obtained, obtaining a third frame image under the camera's current viewfinding range and a fourth frame image under a first viewfinding range or a second viewfinding range, so that target frame images under different viewfinding ranges are continuously obtained and processed to output a corresponding video stream, wherein the first viewfinding range is the camera's unadjusted viewfinding range and the second viewfinding range is a viewfinding range reached after at least two adjustments.
Optionally, processing the first frame image with the second frame image to obtain a target frame image includes:
determining, in the second frame image, a target image area corresponding to the abnormal area of the first frame image, and fusing the target image area into the first frame image to obtain the target frame image, wherein the abnormal area of the first frame image includes at least one of a predicted area of the first object, the target area, the area where the stitching gap is located, and a first area whose display parameters do not meet the display conditions; or
performing complementary merging processing on the second frame image and the first frame image to obtain the target frame image.
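The first option above, fusing a target image area over the abnormal area, can be sketched as a simple region copy, assuming the abnormal area has already been located as a bounding box; the box coordinates and array contents are illustrative.

```python
import numpy as np

def fuse_region(first: np.ndarray, second: np.ndarray,
                box: tuple) -> np.ndarray:
    y0, y1, x0, x1 = box
    target = first.copy()
    target[y0:y1, x0:x1] = second[y0:y1, x0:x1]   # replace seam / blind area
    return target

first = np.zeros((8, 8), dtype=np.uint8)
second = np.full((8, 8), 255, dtype=np.uint8)
print(fuse_region(first, second, (2, 4, 2, 4)).sum())  # fused pixels only
```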
Optionally, the method further comprises:
processing the second frame image by using the first frame image, and reverse-shifting the processed second frame image according to the rotation parameters used to adjust the camera's viewfinding range, to obtain a target frame image; the target frame image carries a larger amount of data than the second frame image.
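One way to read the "reverse shifting" step is to map the pan angle to an approximate pixel shift and undo it, so the result aligns with the first frame's coordinates. The sketch below does this with a pure translation; the pixels-per-degree factor is an assumed calibration constant, not a value from the patent.

```python
import cv2
import numpy as np

PIXELS_PER_DEGREE = 12.0                         # assumed calibration constant

def reverse_shift(image: np.ndarray, pan_deg: float) -> np.ndarray:
    dx = -pan_deg * PIXELS_PER_DEGREE            # undo the pan-induced shift
    m = np.float32([[1, 0, dx], [0, 1, 0]])      # pure horizontal translation
    h, w = image.shape[:2]
    return cv2.warpAffine(image, m, (w, h))
```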
The application also proposes a processing device, comprising:
a first obtaining module, configured to obtain a first frame image, wherein the first frame image is a stitched image obtained through at least two cameras;
a second obtaining module, configured to obtain a second frame image, wherein the second frame image is an image obtained by adjusting the viewfinding range of the cameras;
a first processing module, configured to process the first frame image by using the second frame image to obtain a target frame image;
wherein the target frame image carries a larger amount of data than the first frame image.
The present application also proposes a processing system, the system comprising: an image acquisition device and an image processing device, wherein:
the image acquisition device comprises at least two cameras located at different positions, the at least two cameras acquiring images at the same moment;
the image processing apparatus includes:
a first memory for storing a first program for implementing the processing method as described above;
and a first processor for loading and executing the first program stored in the first memory to implement the processing method.
Optionally, the image processing device further includes:
at least one driving device for controlling the rotation of a corresponding camera to change the camera's viewfinding range;
a second memory for storing a second program that implements the stitching of the images acquired by the at least two cameras into a stitched image;
and a second processor for loading and executing the second program to stitch the images acquired by the at least two cameras into stitched images.
Therefore, after any stitched image obtained through at least two cameras (a first frame image) is acquired, a second frame image is obtained by adjusting the cameras' viewfinding range. The second frame image is then used to process the first frame image, so that abnormal areas in the first frame image, such as unclear regions, incomplete regions, and stitching gaps, can be compensated, yielding a target frame image that carries more data than the first frame image did before processing. This improves the image processing effect and, compared with directly outputting the stitched image as the target frame image, better meets an application's requirements on image display quality.
Drawings
To more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a schematic view of an image acquisition scene;
FIG. 2 is a schematic diagram of an alternative example of a processing system suitable for use in the processing methods presented herein;
FIG. 3 is a schematic diagram of a further alternative example of a processing system suitable for use in the processing methods presented herein;
FIG. 4 is a schematic diagram of a further alternative example of a processing system suitable for use in the processing methods presented herein;
FIG. 5 is a schematic diagram of a further alternative example of a processing system suitable for use in the processing methods presented herein;
FIG. 6 is a flow chart of an alternative example of a processing method presented herein;
FIG. 7 is a flow chart of yet another alternative example of a processing method presented herein;
FIG. 8 is a flow chart of yet another alternative example of a processing method presented herein;
FIG. 9 is a flow chart of yet another alternative example of a processing method presented herein;
FIG. 10 is a schematic diagram of the processing procedure in a panoramic-image blind-area compensation scene of the processing method presented herein;
FIG. 11 is a schematic diagram illustrating an alternative method of controlling camera rotation in the processing method of the present application;
FIG. 12 is a schematic structural diagram of an alternative example of the processing device presented herein.
Detailed Description
Returning to the background discussion, and again taking the application of image stitching in various fields as an example, refer to the image acquisition scene shown in fig. 1. Because a camera's horizontal shooting angle is limited, and is typically configured to be less than 180°, a viewfinding blind area that cannot be captured exists between two adjacent cameras, as shown in the left drawing of fig. 1. If object A is located in this blind area, the final stitched image does not contain object A; that is, the stitched image is incomplete.
To eliminate such blind areas, the number of cameras can be increased to reduce the distance between adjacent cameras, as shown in the right drawing of fig. 1, so that the stitched image includes object A. However, this approach not only increases hardware cost and the demands on image processing capability, but also cannot truly eliminate the blind areas of multiple cameras (object B in fig. 1 still cannot be captured), nor does it solve problems such as gaps in the stitched image.
To improve on this, the present application proposes obtaining two stitched frames, before and after adjusting the cameras' viewfinding range. Where an abnormal image area exists in the first frame image (such as a viewfinding blind area, an unclear area, or a gap area) and the corresponding area of the second frame image is normal, the second frame image is used to process the first frame image (for example, by splicing, replacement, superposition, or floating display), producing a target frame image that meets application requirements, i.e., a frame image carrying more data than the first frame image. Compared with directly taking the stitched image obtained from at least two cameras as the target frame image, this avoids incomplete or unclear target frame images and gaps, improves the image processing effect, and thus better meets application requirements.
The following clearly and completely describes the embodiments of the present application with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without inventive effort fall within the protection scope of the present application.
Referring to fig. 2, a schematic structural diagram of an alternative example of a processing system suitable for the processing method proposed herein; the system may include, but is not limited to, an image acquisition device 210 and an image processing device 220, wherein:
the image acquisition device 210 may include at least two cameras 211 located at different positions. During image acquisition, the cameras may capture images at the same moment, and the images captured at that moment are stitched to obtain a stitched image; the stitching of images from different shooting angles is not described in detail here. The type and number of the cameras 211 constituting the image acquisition device 210 are not limited in this application and may be determined case by case.
It should be noted that the installation position and shooting angle (i.e., viewfinding range) of each camera in the image acquisition device are not limited in this application; they may be determined according to factors such as the shooting environment, the shooting objects, and each camera's performance parameters, so as to ensure as far as possible that every shooting object lies within the viewfinding range of at least one camera.
The image processing device 220 may include a first memory 221 and a first processor 222. The first memory 221 may store a first program implementing the processing method proposed herein; the first processor 222 may load and execute that program to implement the processing method of the embodiments of this application. For the implementation, refer to, without limitation, the description of the corresponding parts of the method embodiments below.
In some embodiments, to stitch the images acquired by the cameras 211, as shown in fig. 3, the image processing device 220 may further include a second memory 223 and a second processor 224. The second memory 223 may store a second program for stitching the images acquired by the multiple cameras 211 at the same moment into a stitched image of the corresponding frame, and the second processor 224 may load and execute this program to obtain the stitched image. The stitching itself can follow standard image stitching techniques and is not described in detail in this application.
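As one illustration of such a second program, OpenCV's high-level stitcher can combine same-instant captures into one frame; the file names below are placeholders, and this is a sketch rather than the patent's implementation.

```python
import cv2

images = [cv2.imread(p) for p in ("cam0.jpg", "cam1.jpg")]  # placeholder paths
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, first_frame = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("first_frame.jpg", first_frame)   # stitched image = first frame
```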
In embodiments of the present application, the memory may include high-speed random access memory and may further include nonvolatile memory, such as at least one magnetic-disk storage device or another nonvolatile solid-state storage device. The processor may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device. The device types of the first memory, first processor, second memory, and second processor are not limited and may be determined by product requirements.
In the processing method provided by this application, obtaining a target frame image that meets application requirements requires adjusting the cameras' viewfinding range; that is, during image acquisition and processing, at least one camera must be controlled to rotate so as to change its viewfinding range, thereby changing the content of the images acquired by the image acquisition device, and hence the content of the stitched image formed from them, so that adjacent stitched frames can be used to compensate each other and obtain the required target frame image.
To adjust a camera's viewfinding range, as shown in fig. 3, the image processing device 220 may further include a driving device 225 connected to the cameras 211 of the image acquisition device 210 to control the adjustment of their viewfinding ranges. The control implementation is not limited in this application; the circuit structure of the driving device 225 may differ between implementations, and its possible structures are not enumerated one by one here.
In some embodiments, as shown in fig. 4, the driving device 225 may include, but is not limited to, a driving controller 2251 and a power component 2252 connected to it; the power component 2252 may include a motor or the like. In practice, the power component 2252 may be connected to each camera 211 in the image acquisition device 210, or to a freely rotatable supporting component that carries the camera, so that when a camera's viewfinding range needs adjustment, the driving controller 2251 operates the power component to rotate the supporting component, changing the viewfinding range of the camera mounted on it. For a camera with a rotatable lens, the lens itself may instead be rotated directly. How the driving device 225 adjusts the viewfinding range of the camera 211 is not limited in this application and may be determined case by case.
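The controller/power-component split described above can be sketched as two small classes; the motor interface here is a hypothetical stand-in, not a real device API.

```python
class PowerComponent:
    """Stands in for a motor turning a camera's supporting component."""
    def __init__(self) -> None:
        self.angle_deg = 0.0

    def turn(self, delta_deg: float) -> None:
        self.angle_deg += delta_deg          # rotate the support (and camera)

class DriveController:
    def __init__(self, motors: dict) -> None:
        self.motors = motors                 # one power component per camera

    def adjust_view(self, camera_id: str, delta_deg: float) -> None:
        self.motors[camera_id].turn(delta_deg)   # change that camera's framing

drive = DriveController({"cam0": PowerComponent()})
drive.adjust_view("cam0", 15.0)              # pan cam0 by 15 degrees
```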
To obtain a second frame image different from the previous stitched frame (the first frame image), the viewfinding range of at least one camera in the image acquisition device 210 is adjusted; that is, the second frame image is stitched from images acquired after at least one camera's viewfinding range has been adjusted. Which camera or cameras need adjustment can be determined by the scene requirements and is not described in detail here.
In addition, after the image acquisition device 210 is installed in the current shooting scene, the installation position of each camera 211 is usually fixed, and shooting parameters such as the shooting angle, depth of field, resolution, and brightness can be determined accordingly. During image processing, these parameters and information such as the relative positional relationships between the cameras can be used to determine whether an obtained stitched image has an abnormal area (i.e., an area that does not meet the display conditions, such as an unclear or blurred area lying outside the focal region, a stitching seam, or a viewfinding blind area), so that any frame with an abnormal area can be processed in time, ensuring that the resulting target frame image meets application requirements.
In some embodiments of the present application, the image acquisition device 210 and the image processing device 220 may be separate independent devices or be integrated in the same device, as the scene requires. In still other embodiments, the first memory 221 and first processor 222 of the image processing device 220 may be deployed on a first device; the second memory 223 and second processor 224 may be deployed on the first device, on a second device different from the first device, or integrated in the image acquisition device 210; and the driving device 225 may be a separate third device deployed at the shooting site of the image acquisition device 210. The deployment relationships among the components of the image processing device 220 are not limited here.
For example, referring to fig. 5, the cameras 211, the second memory 223, and the second processor 224 may be integrated into one electronic device (denoted the first electronic device) to implement image acquisition and stitching; the driving device 225 may be a standalone device or be integrated in the first electronic device, and is used to adjust the viewfinding range of the cameras 211. The independent electronic device containing the first memory 221 and the first processor 222 may be a second electronic device, which may further include a communication interface for connecting to the first electronic device; it receives the stitched image sent by the first electronic device, has the first processor 222 execute the first program to process it, and outputs the target frame image through the second electronic device's display or sends it to another display device, though the process is not limited to this.
The camera viewfinding range may be adjusted by the driving device 225 according to preset rules, or the second electronic device may send a corresponding driving instruction to the driving device 225 to control the adjustment; the implementation is not limited and may be determined case by case.
It can be appreciated that the first device, second device, first electronic device, and second electronic device may include, but are not limited to, a smartphone, tablet computer, wearable device, augmented reality (AR) device, virtual reality (VR) device, vehicle-mounted device, smart home device, smart medical device, robot, desktop computer, and the like, and may be flexibly selected according to scene requirements.
Furthermore, the processing system structures shown in the drawings above do not limit the processing system of the embodiments of this application; in practice, the processing system may include more or fewer components than shown, or combine some of them. The electronic devices constituting the processing system are likewise not limited to the components shown and may include, as required, other output components such as speakers, vibration mechanisms, and lamps; other input components such as a keyboard, mouse, and microphone; and various sensor modules, antenna modules, mobile communication modules, and so on, which are not listed individually here.
Referring to fig. 6, a flowchart of an alternative example of the processing method provided in this application. The method may be executed by a computer device, which may include a terminal and/or a server: the terminal may include, but is not limited to, the electronic devices described above, and the server may be an independent physical server, a service cluster of multiple physical servers, or a cloud server with cloud computing capability, exchanging data with the terminal through a wired or wireless network as the application requires. As shown in fig. 6, the method may include:
Step S61, obtaining a first frame image, wherein the first frame image is a stitched image obtained through at least two cameras;
As discussed above, because one camera's shooting angle is limited, at least two cameras can be configured to acquire images of the shooting objects from different angles, and the images acquired by those cameras at the same moment are then stitched using an image stitching technique to obtain the corresponding stitched image. The stitched image acquired at any moment can be recorded as a first frame image.
It should be noted that the at least two cameras may be integrated in the computer device, or belong to an independent image acquisition device separate from it; in the latter case, the computer device may run an image stitching program after receiving the images acquired by the cameras to obtain the first frame image, or the image acquisition device may complete the stitching itself and send the stitched image to the computer device.
It can be understood that the stitched image may be a panoramic image stitched from multiple images covering a 360° viewing angle, or a high-dynamic-range (HDR) image stitched from images of other viewing angles.
Step S62, obtaining a second frame image, wherein the second frame image is an image obtained by adjusting the camera's viewfinding range;
The obtained first frame image may contain an abnormal area caused by various factors and thus fail to meet application requirements. To correct such an area, the viewfinding range of at least one camera in the current shooting environment is adjusted (i.e., the viewfinding ranges of all cameras, or only some of them), and images are acquired after the adjustment to compensate the abnormal area of the first frame image. Therefore, the second frame image in this embodiment may include an image acquired by at least one camera after its viewfinding range has been adjusted, for example an image acquired by one adjusted camera, or multiple images from multiple cameras, or a stitched image; neither the acquisition manner nor the content of the second frame image is limited here.
It should be noted that when the first frame image does not meet the application requirements, the obtained second frame image at least includes an image area that does; the content of the application requirements and the way the corresponding image area is determined are not limited, and are optional in this application.
For example, if the first frame image has a stitching gap, or an abnormal area such as one containing an uncaptured object caused by the viewfinding blind zone between adjacent cameras, this application may adjust the viewfinding ranges of all cameras, or synchronously adjust those of some cameras, for that abnormal area, so that the environment region corresponding to it falls within the adjusted viewfinding range of at least one camera; the second frame image obtained from the adjusted cameras' images then includes a normal image area corresponding to the abnormal area of the first frame image.
Step S63, processing the first frame image by using the second frame image to obtain a target frame image; the target frame image carries a greater amount of data than the first frame image.
As described above, this embodiment may compensate the abnormal region of the first frame image with the second frame image, for example by replacing the abnormal region with the corresponding normal image region of the second frame image, superimposing the normal region onto it, or splicing the normal region into it, thereby eliminating the abnormal region from the processed first frame image (i.e., the target frame image). The target frame image obtained in this embodiment therefore carries more data than the first frame image did before processing: its content is more complete and clear and contains no stitching gap, which improves the image processing effect, better meets the image quality requirements that video interaction scenes such as live streaming or conferencing place on the output video stream, and improves the user experience.
Referring to fig. 7, a flowchart of yet another alternative example of the processing method proposed in this application. This embodiment describes an optional refined implementation of the processing method of the foregoing embodiment, though the method is not limited to this refinement, and may still be executed by the computer device described above. As shown in fig. 7, the processing method of this embodiment may include:
Step S71, obtaining a first frame image, wherein the first frame image is a stitched image obtained through at least two cameras;
Step S72, detecting whether the first frame image meets the shooting requirements; if so, proceeding to step S73; if not, proceeding to step S74;
Step S73, determining the first frame image as the target frame image;
Step S74, obtaining a second frame image, wherein the second frame image is an image obtained by adjusting the camera's viewfinding range;
in this embodiment of the present application, the shooting requirement may be a requirement on at least one aspect of an object, image definition, image integrity, whether there is a stitching gap, etc. included in each obtained frame image, and may be determined according to an application scenario requirement, which is not limited in this application. It will be appreciated that the shooting requirement is generally predetermined, so that after the first frame image is obtained, it can be detected whether the first frame image meets a preset shooting requirement, if so, it can be output as a target frame image, and if not, compensation processing needs to be performed on the first frame image.
Based on the above analysis, in some embodiments, for a blind view exists between at least two cameras deployed in a current shooting environment, that is, as shown in fig. 2, there is a first object to be shot outside the scope of the at least two cameras, which will cause the obtained first frame image to not include the first object, in order to ensure that the output target frame image includes the first object, the present application may obtain a second frame image including the first object, thereby implementing processing of the first frame image, adding an image area of the first object to the first frame image, for example, splicing, overlapping or suspending an image area corresponding to the first object in the second frame image to a corresponding position in the first frame image, to obtain a target frame image including the first object and each object included in the first frame image.
It can be seen that, in the present embodiment, the second frame image may be obtained again in the case where it is detected that the first object is not included in the first frame image. The first object may refer to any one or more or a class of objects located outside the view finding range (i.e., view finding blind areas) of each camera in the current shooting environment, and since the view finding blind areas between different cameras may be predetermined according to the method described in the corresponding part above, then the first object located in the view finding blind area may be determined according to the application requirement, so the shooting requirement in this embodiment may include the first object for each obtained frame image. It will be appreciated that the determined category of the first object may be different for different application requirements and/or shooting environments, and the present application does not limit the content of the first object.
In the application of the above embodiment, the above first object may be specified in advance, particularly in the processing application of the first frame image; of course, in the process of continuously adjusting the view finding range of the camera to shoot, as all shooting objects in the current shooting scene can be determined by combining different frame images, the application can also specify the first object to be shot currently according to the current object, and can also continuously adjust the type and the number of the specified first object according to the change of shooting requirements, and the implementation process is not described in detail herein.
In still other embodiments, since the first frame image is a stitched image, the present application may further detect whether a stitching gap exists in the first frame image, if a stitching gap exists, determine an area where the stitching gap exists as an abnormal area, and in order to eliminate the abnormal area, obtain the second frame image according to the method described above, that is, an image including a normal image area corresponding to the shooting environment in the abnormal area. It should be noted that the method for detecting whether a splice gap exists in a frame of image is not limited, for example, information change of adjacent pixels is compared and determined.
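The adjacent-pixel comparison mentioned above can be sketched as a scan of column-to-column intensity differences, since a stitching gap often appears as an abrupt change between neighbouring columns; the threshold and the synthetic seam are illustrative assumptions to be tuned per scene.

```python
import numpy as np

def find_gap_columns(gray: np.ndarray, thresh: float = 60.0) -> list:
    # mean absolute difference between each pair of adjacent columns
    diffs = np.abs(np.diff(gray.astype(np.int32), axis=1)).mean(axis=0)
    return [x for x, d in enumerate(diffs) if d > thresh]

gray = np.full((4, 6), 120, dtype=np.uint8)
gray[:, 3] = 0                                # simulate a dark seam column
print(find_gap_columns(gray))                 # columns flanking the seam
```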
In still other embodiments, this application may further detect whether the display parameters of a first area in the first frame image meet the corresponding display conditions; that is, the shooting requirements may also include this check. The display parameters required for different content areas may differ, as may the display conditions corresponding to different display parameters. In practice, the respective shooting requirements of different shooting objects, such as at least one display condition on the display parameters of the corresponding captured areas, can be determined for the obtained first frame image, and it can then be judged whether the detected display parameters meet their corresponding display conditions.
Optionally, the display conditions may include that the displayed image reaches a preset definition and is not blurred; the corresponding display parameters may include parameters characterizing image clarity, blur, and the like. The first area of the first frame image may be any area of it, an area containing a specified shooting object, a splicing region between images acquired by different cameras, or a specific area determined from information such as the positional relationships between cameras and their shooting parameters; neither the position of the first area within the first frame image nor its content is limited.
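One common clarity measure that could serve as such a display parameter is the variance of the Laplacian, which drops sharply on blurred regions; the threshold below is an assumed value to be calibrated per camera, and this sketch is only one possible realization of the check.

```python
import cv2
import numpy as np

def region_is_sharp(gray_region: np.ndarray, thresh: float = 100.0) -> bool:
    # high variance of the Laplacian indicates strong edges, i.e. a sharp region
    return cv2.Laplacian(gray_region, cv2.CV_64F).var() > thresh

noise = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(region_is_sharp(noise))     # high-frequency content -> likely True
```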
In still other embodiments of this application, whether the obtained first frame image satisfies the shooting conditions may also be judged by detecting whether a second object enters the target area while the first frame image is being obtained; clearly, the detection of step S72 need not wait until step S71 completes. Here, the second object may be any object in the shooting scene, and the target area may be an area outside the viewfinding ranges of the cameras in the scene, i.e., a viewfinding blind zone between them; as analyzed above, the position of the target area can be predetermined from information such as the positional relationships between the cameras in the current scene and their shooting parameters (shooting angle, depth of field, and the like).
Then, while the first frame image is being obtained, because the shooting objects may move and/or the cameras may rotate, and because camera shooting angles are limited (typically less than 180°), a second object to be captured may fall within the target area, so that the obtained first frame image does not contain the second object, or does not contain it completely. In this case, a second frame image containing the second object can be obtained by adjusting the cameras' viewfinding range.
In still other embodiments, whether a viewfinding blind zone arises between the cameras can be calculated in advance from information such as the positional relationships between them and each camera's shooting parameters; and, as described above, other abnormal areas may also arise in an acquired stitched image. Therefore, to ensure that the output target frame image meets the shooting requirements, an acquisition trigger event for the adjacent next frame can be configured in advance, so that after the first frame image is acquired, a second frame image may be acquired directly once the preset acquisition trigger event occurs.
The acquisition trigger event may include, but is not limited to, a trigger event for rotation control of the at least two cameras, especially in shooting scenes where the cameras rotate in real time. For example, timing may start once the first frame image is obtained, and whether the elapsed time has reached the acquisition time of the next frame is judged: when a second acquisition moment is reached, the second frame image is obtained. Alternatively, the camera's rotation angle or shooting direction may be monitored, e.g., whether, after the first frame image, the rotation angle reaches a preset angle or the shooting direction reaches a preset direction; if so, the second frame image is obtained. The content of the acquisition trigger event is not limited here; understandably, for trigger events of different content, the detection implementation may differ, including but not limited to the methods listed above.
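The two trigger checks just described, a frame timer and a rotation-angle test, can be combined as in the sketch below; the interval and angle tolerance are illustrative assumptions.

```python
import time

def capture_triggered(last_capture_s: float, interval_s: float,
                      rotation_deg: float, target_deg: float) -> bool:
    timed_out = time.monotonic() - last_capture_s >= interval_s
    angle_reached = abs(rotation_deg - target_deg) < 0.5   # degree tolerance
    return timed_out or angle_reached

print(capture_triggered(time.monotonic(), 1.0, 30.0, 30.2))  # angle trigger fires
```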
The methods for detecting whether the first frame image meets the shooting requirements include, but are not limited to, those listed above; in practice, one or more of them can be combined according to the actual shooting needs, the corresponding shooting requirements determined, and the first frame image obtained in the shooting scene checked against them. If the shooting requirements combine multiple checks, meeting them means that all detection results satisfy their corresponding requirements, while failing to meet them means that at least one detection result does not; the combined examples are not detailed here.
Step S75, processing the first frame image by using the second frame image to obtain a target frame image; the target frame image carries a greater amount of data than the first frame image.
Accordingly, if the first frame image is determined, while or after being obtained, to meet the shooting requirements, for example it contains the first object, and/or no second object has entered the target area, and/or it has no stitching gap, and/or the display parameters of its first area meet the corresponding display conditions, and/or no acquisition trigger event has occurred, where the checks performed depend on the content of the shooting requirements and include but are not limited to the methods listed in this application, then the first frame image can be considered to meet the application requirements, needs no further compensation, and may be output directly as the target frame image.
Conversely, if at least one of these checks fails, i.e., the first frame image does not meet the shooting requirements, then it does not meet the application requirements either, and compensation processing must be performed on it before it can be output.
Based on this analysis, one implementation of step S75 may be: determining, in the second frame image, a target image area corresponding to the abnormal area of the first frame image, and fusing the target image area into the first frame image to obtain the target frame image, where the abnormal area of the first frame image includes at least one of the predicted area of the first object, the target area, the area where the stitching gap is located, and a first area whose display parameters do not meet the display conditions. In another possible implementation, the second frame image and the first frame image may be directly merged complementarily to obtain the target frame image. The implementation of step S75 is not limited and may be determined case by case.
Referring to fig. 8, a flowchart of still another alternative example of the processing method proposed in this application. This embodiment refines the process of obtaining the second frame image described above; for the processing steps before the second frame image is obtained, refer to, without limitation, the descriptions of the corresponding parts of the foregoing embodiments, which are not repeated here. As shown in fig. 8, the method may include:
Step S81, obtaining a first frame image, wherein the first frame image is a stitched image obtained through at least two cameras;
Step S82, if the first frame image does not contain a first object, obtaining a first positional relationship between the first object and a first target camera determined from the at least two cameras;
Following the above description of whether the first frame image meets the shooting requirements, this embodiment takes as its example the requirement that the first frame image contain a first object, i.e., a shooting object located outside the viewfinding ranges of the at least two cameras. If the first frame image is determined not to contain the first object, the obtained first frame image has a viewfinding blind zone, i.e., there is a required shooting object not captured by any camera; to obtain an image of the first object, this application proposes adjusting the cameras' viewfinding range.
As described in the corresponding parts of the foregoing embodiments, at least one of the cameras installed in the current shooting scene may be selected as the first target camera, i.e., a camera whose viewfinding range is to be adjusted by rotating its lens.
For example, in the shooting scene of fig. 1, if the first object is object A, one or more cameras whose viewfinding ranges are closest to object A's position may be selected as the first target camera (e.g., camera 2), so that its viewfinding range can subsequently be adjusted to bring object A into it. In another possible implementation, all cameras in the current shooting scene may be determined as first target cameras, so that adjusting all their viewfinding ranges likewise ensures that the obtained second frame image contains the first object. Note that when the first target cameras are all cameras in the current scene, their rotation angles during adjustment may be the same or different, as the case may be.
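One way to realize this selection is to pick the camera whose current viewing direction is angularly closest to the missed object, as in the planar sketch below; the coordinates, headings, and scoring rule are illustrative assumptions, not values from the patent.

```python
import math

def pick_target_camera(cameras: dict, obj_xy: tuple) -> str:
    def angular_offset(cam: dict) -> float:
        dx, dy = obj_xy[0] - cam["x"], obj_xy[1] - cam["y"]
        bearing = math.degrees(math.atan2(dy, dx))       # direction to the object
        return abs((bearing - cam["heading_deg"] + 180) % 360 - 180)
    return min(cameras, key=lambda cid: angular_offset(cameras[cid]))

cams = {"cam1": {"x": 0, "y": 0, "heading_deg": 0},
        "cam2": {"x": 2, "y": 0, "heading_deg": 90}}
print(pick_target_camera(cams, (3, 3)))       # the camera needing least rotation
```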
Step S83, adjusting the viewfinding range of the first target camera based at least on the first positional relationship, to obtain a second frame image including the first object;
After the first target camera is determined, the driving device in the processing system can control it to rotate based at least on the corresponding first positional relationship, changing its viewfinding range to ensure that the obtained second frame image contains the first object.
In one possible implementation, following the analysis above, the rotation parameters of each first target camera can be determined based at least on the first positional relationship; the driving device then rotates each first target camera according to those parameters, and the second frame image containing the first object is obtained from the images acquired by the rotated first target cameras, possibly together with the other, unrotated cameras.
Step S84, processing the first frame image by using the second frame image to obtain a target frame image, wherein the target frame image carries a larger amount of data than the first frame image;
The implementation of step S84 may differ for second frame images of different content and is not limited in this application. Optionally, if the second frame image is a stitched image obtained from the cameras after the viewfinding adjustment, the image area containing the first object within it may be determined and then spliced, superimposed, or floated at the corresponding position of the first frame image, so that the obtained target frame image contains both the content of the first frame image and the first object.
Step S85, controlling the corresponding driving device to restore the first target camera to its initial position, so as to continue obtaining corresponding target frame images;
Step S86, outputting a video stream formed from the multiple target frame images.
In this embodiment, if the obtained stitched image is a panoramic image, the second frame image is obtained from images acquired after the driving device rotates the first target camera. To reduce the camera's rotation amplitude, especially where that amplitude is limited, the corresponding driving device may, after the second frame image is obtained, restore the rotated camera (i.e., the first target camera) to its initial position, i.e., the position it occupied when the first frame image was obtained.
How the driving device controls camera rotation to adjust the viewfinding range is not limited in this application; refer to, without limitation, the description of the driving device in the system embodiment above. In practice, one driving device can be configured per camera to control its rotation, i.e., the adjustment of its viewfinding range; of course, for driving devices of different structures or different rotation-control schemes, one driving device may also synchronously control several cameras, and so on; the implementation is not detailed here.
It should be understood that after the first target camera whose viewfinding range was adjusted is restored to its initial position, the next acquired stitched frame can be treated as a new first frame image and corresponding target frame images obtained as described above, so that the pipeline outputs a video stream formed from the continuously acquired target frames. This ensures that every frame of the output video stream meets the shooting requirements, addressing the current problem that directly outputting stitched images may yield unclear, incomplete, gapped, or blurred frames and hence a poor image processing effect.
In still other embodiments of the present application, in combination with the above description about the shooting requirement, the implementation method for obtaining the second frame image described in the above embodiment may further include: a second positional relationship between the first object and at least two cameras (e.g., all cameras installed in the current shooting environment) is obtained, and then, based at least on the second positional relationship, a view range of the at least two cameras may be adjusted to obtain a second frame image including the first object.
Based on this, in still other embodiments, if a mode of synchronously controlling rotation of all cameras installed in the current shooting environment is adopted, respective view ranges of all cameras are adjusted to obtain a second frame image, after a first object which is not shot in a first frame image is known, rotation parameters such as a rotation direction, a rotation angle and the like of each camera can be determined according to a second position relationship such as a distance between the first object and the view range of each camera, the rotation angle and the like, so that a driving device controls rotation of a corresponding camera accordingly, the view range of the corresponding camera is changed, the first object is located in the view range of a certain camera, and thus, the acquired images of each camera after adjusting the view range are spliced, and the obtained second frame image contains the first object.
It should be noted that, in the adjustment implementation process of the view finding ranges of the cameras in the current shooting environment, the adjustment implementation process is not limited to the above-described adjustment implementation process according to the same selection parameter, and all the cameras are controlled to synchronously rotate, under certain scenes, the view finding ranges of the corresponding cameras can be respectively adjusted according to the scene control requirements and the position relationship between different cameras and the first object, that is, the adjustment of the view finding direction/angle, the shooting parameter and the like of the corresponding cameras can be respectively implemented according to different rotation parameters or incompletely consistent rotation parameters, the rotation control requirements of the cameras can be flexibly met, and the implementation process is not limited.
In addition, combining the above analysis, in still other embodiments of the present application, if the shooting requirement further includes that no frame contain a stitching gap, then before the second frame image is obtained, the application may directly control the configured at least two cameras (i.e., all cameras installed in the current shooting environment) to rotate according to the same rotation parameter (which may include, but is not limited to, rotation direction, angle, and time), that is, rotate all cameras synchronously, so as to obtain a second frame image containing the area corresponding to the stitching gap in the first frame image. Optionally, the rotation parameter may be determined at least from information such as the positional relationship between the cameras and their shooting parameters, which is not limited in this application.
In still other embodiments, if the shooting requirement further covers a second object entering the target area (i.e., an area outside the view ranges of all cameras, a view blind zone), the application may obtain a third positional relationship between the target area and the cameras and adjust the view range of at least one of the at least two cameras based at least on that relationship, so as to obtain a second frame image containing the second object. For example, the rotation parameter of at least one camera can be determined from the pre-calculated blind zones between cameras and the distance and angle to those blind zones; rotating that camera according to the parameter changes its view range so that the pre-calculated blind zone enters it. The resulting second frame image then includes the second object that entered the target area (the target area here refers to the blind zone of the first frame image, not a blind zone of the second frame image).
It may be appreciated that in the foregoing embodiments, when the camera view ranges are adjusted based at least on the third positional relationship, the adjusted camera may be one or more cameras in the current shooting environment, and the way the adjusted camera is determined is similar to the way the first target camera is determined, as described above; this is not repeated here. Where all cameras need adjusting, they can be controlled to rotate synchronously, or the rotation of each camera can be controlled at least from its own third positional relationship, in the manner described above, so as to adjust its view range; the implementation process is not detailed in this application.
In still other embodiments of the present application, if the shooting requirement further includes that display parameters meet corresponding display conditions, then when obtaining the second frame image, the driving device may control the rotation of at least one camera corresponding to a first region whose display parameters do not meet those conditions, so as to obtain a second frame image in which the display parameters of the first region do meet them. The rotation-control implementation is similar to those of the other embodiments described above and is not detailed in this application. Note that when all cameras are rotated, they may rotate synchronously according to the same rotation parameter or be controlled with their own rotation parameters, as the situation requires.
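As one concrete (and assumed) reading of "display parameters meeting display conditions", the sketch below scores a region by sharpness (variance of the Laplacian) and mean brightness; a region failing either check would trigger the rotation described above. The choice of metrics and thresholds is illustrative only.

```python
# Illustrative check of one region's "display parameters": sharpness
# (variance of the Laplacian) and mean brightness. The metrics and the
# thresholds are assumptions; the text does not define the conditions.
import cv2
import numpy as np

def region_meets_display_condition(region_bgr: np.ndarray,
                                   min_sharpness: float = 100.0,
                                   brightness_range=(40.0, 220.0)) -> bool:
    gray = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry
    brightness = float(gray.mean())
    return (sharpness >= min_sharpness
            and brightness_range[0] <= brightness <= brightness_range[1])

region = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)
print(region_meets_display_condition(region))  # noisy region: likely True
```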
In still other embodiments, if the first frame image does not meet the shooting requirement, an additional camera (denoted a second camera) may be configured to capture the image area, or the object, of the first frame image that fails the requirement. For example, the view direction of the second camera is steered toward the non-compliant area or object so that it falls within the second camera's view range, and its image is thereby acquired. Illustratively, the view range of the second camera may be controlled to cover the area outside the view ranges of the at least two cameras, and/or the area corresponding to the stitching gap in the first frame image, and/or the first region whose display parameters do not meet the corresponding display conditions, and so on.
In addition, when adjusting the camera view range to obtain the second frame image, environmental information about the current shooting environment can also be used. For example, deterministic or uncertain factors such as movement of the photographed object, changes in illumination, or movement of an occluding object can be taken into account when determining the camera rotation parameters; the driving device then rotates the camera according to those parameters to change its view range, ensuring that the second frame image captures the area or object of the first frame image that failed the shooting requirement. The rotation parameters of different cameras may be the same or different, depending on the situation; neither their content nor how they are obtained is limited here.
Referring to fig. 9, a flowchart of a further alternative example of the processing method proposed in this application. Unlike the method above, which obtains the target frame image from the adjacent first and second frame images, fig. 9 shows a further way of obtaining the target frame image. The method may still be executed by a computer device and, as shown in fig. 9, may include:
step S91, obtaining a first frame image, wherein the first frame image is a spliced image obtained through at least two cameras;
step S92, after adjusting the view range of at least one camera, obtaining a second frame image by stitching the images acquired by the at least two cameras;
regarding the implementation of step S91 and step S92, reference may be made to, but is not limited to, the description of the corresponding parts of the above embodiments, which is not repeated here.
For example, take the application scenario of obtaining a panoramic image of the current shooting environment. Because each camera's view range is limited, there may be a view blind zone between adjacent cameras, and an object located in that blind zone cannot be photographed. Thus, after the images collected by all cameras in the environment are stitched in sequence, the resulting stitched image may contain a blind area (the environment region corresponding to the blind zone, i.e., the region where an object in the blind zone would be expected to appear). In the expanded schematic of the first frame image in the first row of fig. 10, the gray area is the area corresponding to the view blind zone; it can be calculated based at least on the positional relationship between the cameras and information such as their shooting parameters.
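The following sketch illustrates one such pre-calculation under a strong simplifying assumption: all cameras share a single optical centre on a ring, so blind zones reduce to angular gaps between adjacent fields of view. The real calculation would also account for camera positions, as noted above; the function and its parameters are hypothetical.

```python
# Angular blind zones between adjacent cameras on a ring, assuming all
# cameras share one optical centre (a deliberate simplification; the text
# also uses the cameras' positions). Function and parameters are hypothetical.
def blind_zones(headings_deg, fov_deg):
    hs = sorted(h % 360 for h in headings_deg)
    zones = []
    for i, h in enumerate(hs):
        nxt = hs[(i + 1) % len(hs)]
        right_edge = h + fov_deg / 2                       # this camera's edge
        left_edge = (nxt if i + 1 < len(hs) else nxt + 360) - fov_deg / 2
        if left_edge > right_edge:                         # uncovered gap
            zones.append((right_edge % 360, left_edge % 360))
    return zones

# Four cameras 90 degrees apart, each with an 80-degree field of view:
print(blind_zones([0, 90, 180, 270], 80))
# -> [(40.0, 50.0), (130.0, 140.0), (220.0, 230.0), (310.0, 320.0)]
```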
To obtain a panoramic image without a blind area, this application proposes that, when acquiring the next frame, at least one camera be rotated to change its view range so that the blind area of the first frame image falls within the view range of at least one camera. In this embodiment, all cameras are controlled to rotate synchronously by an angle, so that the blind zone present when the previous frame was acquired lies within one camera's view range, yielding the second frame image shown in the second row of fig. 10. Although the second frame image still has a blind area, the corresponding region of the first frame image meets the shooting requirement, so the blind area of the second frame image can be compensated with the corresponding region of the first frame image.
step S93, processing the second frame image with the first frame image, and moving the processed second frame image in reverse according to the rotation parameters used to adjust the camera's view range, so as to obtain the target frame image.
Still taking the panoramic processing flow of fig. 10 as an example: the blind area in the second frame image is the area that does not meet the shooting requirement. Combining the description of the shooting requirement above, the gray area in fig. 10 could equally be the first object, the second object, a first region whose display parameters do not meet the corresponding display conditions, or a stitching-gap area; the figure is described only for the case where the blind area arises from objects that were not photographed, and the processing for the other cases is similar and is not detailed one by one.
In the blind-area compensation scenario, because the position and size of the blind area of each stitched image can be calculated in advance by the method described above, once at least one blind area of the second frame image is determined, the corresponding target image area in the first frame image can be determined accordingly. That target image area is then used to compensate the corresponding blind area of the second frame image, for example by stitching, fusing, overlaying, or superimposing it onto that blind area, so that the processed second frame image contains the image content of the shooting environment corresponding to the blind area before processing.
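A minimal sketch of this compensation by direct overwriting follows; the blind areas are represented as pre-computed column spans of the panorama, and only the simplest of the listed options (overlaying the target image area onto the blind area) is shown. The names and the column-span representation are assumptions.

```python
# Compensation by direct overwriting: each pre-computed blind span of the
# second frame is replaced with the same span of the first frame. The
# column-span representation of blind areas is an assumption.
import numpy as np

def compensate_blind_areas(second, first, blind_cols):
    out = second.copy()
    for c0, c1 in blind_cols:
        out[:, c0:c1] = first[:, c0:c1]  # target image area from frame 1
    return out

first = np.full((4, 12, 3), 200, dtype=np.uint8)   # previous frame, all valid
second = np.zeros((4, 12, 3), dtype=np.uint8)      # current frame, blind at 4..6
result = compensate_blind_areas(second, first, [(4, 6)])
print(result[0, 3:7, 0])  # [  0 200 200   0] -- blind span filled from frame 1
```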
Optionally, the first frame image may be used directly to perform complementary merging on the second frame image, so that the processed second frame image contains both the original first-frame content and the second-frame content. The implementation of blind-area compensation is not limited in this application and includes, but is not limited to, the methods described above.
In practice, to keep the image content of the output target frame image in any direction consistent with the content of the first frame image in the corresponding direction, the second frame image obtained after rotating the camera can, once processed, be moved in reverse according to the earlier rotation angle and then output as the target frame image. The reverse movement may be implemented by moving the image area at the end of the processed second frame image that corresponds to the rotation angle back to the start position; this is not detailed in this application.
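For a full 360-degree panorama, this reverse movement amounts to a circular shift of the image columns by the pixel equivalent of the rotation angle, as in the sketch below (the one-pixel-per-degree mapping is an assumption for illustration).

```python
# Reverse movement as a circular shift: for a panorama whose width spans
# exactly 360 degrees, shifting columns back by the rotation angle restores
# the original direction-to-content mapping. One pixel per degree assumed.
import numpy as np

def reverse_shift(panorama: np.ndarray, rotation_deg: float) -> np.ndarray:
    width = panorama.shape[1]
    shift_px = int(round(rotation_deg / 360.0 * width))
    return np.roll(panorama, -shift_px, axis=1)  # end region moves back to start

pano = np.arange(360).reshape(1, 360)            # 1 pixel per degree
print(reverse_shift(pano, 15.0)[0, :3])          # [15 16 17]
```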
Following the embodiments above, after the second frame image is obtained, the rotated camera may be restored to its pre-rotation initial position by the method described above so that target frame images can be obtained and output continuously. That is, the camera is controlled to rotate back and forth in alternating directions: for example, it rotates from position A to position B to obtain the next frame image, is restored from B to A, and then acquires the frame after that. In other words, at least one camera may be rotated twice for each frame image obtained, and the rotation may be implemented according to, but not limited to, the method described above.
In still other embodiments, while frame images are acquired continuously, the camera may instead be controlled to keep rotating in the same direction, so that its view range changes continuously. As in the embodiments above, once the second frame image is obtained, the corresponding target frame image can be obtained as described; each subsequently acquired frame is then processed by the same method. For example, a third frame image (equivalent to a new first frame image) is acquired at the camera's current view range, and a fourth frame image (equivalent to a new second frame image) is acquired with the camera at the first view range (the camera's unadjusted view range) or the second view range (the view range after at least two adjustments). Target frame images at different view ranges are thus obtained continuously, processed, and output as a video stream, meeting the panoramic-image display-quality requirements of applications such as video conferencing and live streaming.
For how any pair of adjacent frame images (e.g., stitched images) is processed to obtain the corresponding target frame output, the implementation is similar; reference may be made to the description of the corresponding parts of the above embodiments, which is not repeated here.
Building on the processing methods of the embodiments above, to improve processing efficiency, make full use of processing resources, and avoid motion affecting image acquisition, the driving device can rotate the camera and adjust its view range during the idle time between frame captures. As shown in the camera drive-control schematic of fig. 11, there is usually a time interval between the capture of adjacent frames. The application can determine the time interval between obtaining the first frame image and the second frame image and then control the corresponding driving device, based on that interval, to complete the adjustment of the camera's view range. Per the image-capture timing curve and camera rotation-speed curve in fig. 11, the camera rotation is completed within the interval while the camera position is held fixed during image capture, avoiding any adverse effect of the driving device on captured image quality.
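The sketch below illustrates this scheduling idea: rotation is attempted only in the idle gap after a capture and skipped when the gap is too short, so the camera is always stationary during exposure. The interval, settle time, and device callbacks are hypothetical.

```python
# Rotate only in the idle gap between captures, never during exposure.
# The interval, settle time, and device callbacks are hypothetical.
import time

FRAME_INTERVAL = 1 / 30   # assumed capture interval, seconds
ROTATE_TIME = 0.02        # assumed time the drive needs to rotate and settle

def capture_then_rotate(capture, rotate, angle_deg):
    t0 = time.monotonic()
    frame = capture()                            # camera held still here
    idle = FRAME_INTERVAL - (time.monotonic() - t0)
    if idle >= ROTATE_TIME:
        rotate(angle_deg)                        # motion confined to the gap
    # else: too little idle time -- skip rotating rather than blur a frame
    return frame

# Example with dummy callbacks:
print(capture_then_rotate(lambda: "frame", lambda a: None, 15.0))
```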
Optionally, neither the value of the time interval between adjacent frame images nor the method of determining it is limited in this application.
Referring to fig. 12, a schematic structural diagram of an alternative example of a processing apparatus proposed in this application; the apparatus may include:
a first obtaining module 121, configured to obtain a first frame image, where the first frame image is a stitched image obtained by at least two cameras;
a second obtaining module 122, configured to obtain a second frame image, where the second frame image is an image obtained by adjusting a view range of the camera;
a first processing module 123, configured to process the first frame image with the second frame image to obtain a target frame image;
the data amount carried by the target frame image is larger than the data amount carried by the first frame image.
In still other embodiments, the second obtaining module 122 may include at least one of the following obtaining units:
a first obtaining unit, configured to obtain the second frame image if a first object is not included in the first frame image, where the first object is located outside the view ranges of the at least two cameras;
a second obtaining unit, configured to obtain the second frame image if a second object is detected entering a target area while the first frame image is obtained, where the target area is an area outside the view ranges of the at least two cameras;
a third obtaining unit, configured to obtain the second frame image when a stitching gap exists in the first frame image;
a fourth obtaining unit, configured to obtain the second frame image if the display parameter of the first area in the first frame image does not meet the corresponding display condition;
and a fifth obtaining unit, configured to obtain an acquisition trigger event, and obtain the second frame image.
Optionally, for each of the obtaining units above, the process by which it obtains the second frame image may be implemented by a corresponding one of the following units; thus, the second obtaining module 122 may further include:
a sixth obtaining unit configured to obtain a first positional relationship between the first object and a first target camera determined from the at least two cameras, and adjust a view range of the first target camera based at least on the first positional relationship, to obtain a second frame image including the first object;
a seventh obtaining unit, configured to obtain a second positional relationship between the first object and the at least two cameras, and adjust the view ranges of the at least two cameras based at least on the second positional relationship, so as to obtain a second frame image including the first object;
an eighth obtaining unit, configured to obtain a third positional relationship between the target area and the camera, and adjust the view range of at least one of the at least two cameras based at least on the third positional relationship, to obtain a second frame image including the second object;
a ninth obtaining unit, configured to control the at least two cameras to rotate according to the same rotation parameter, so as to obtain a second frame image that includes an area corresponding to the stitching gap;
a tenth obtaining unit, configured to control at least one camera corresponding to the first area to rotate, so as to obtain a second frame image of the first area including the display parameter meeting the corresponding display condition.
Based on the analysis, the first processing module 123 may include:
a target image area determining unit, configured to determine, in the second frame image, a target image area corresponding to an abnormal area of the first frame image;
a fusion processing unit, configured to fuse the target image area into the first frame image to obtain a target frame image, where the abnormal area of the first frame image includes at least one of: a predicted area of the first object, the target area, the area where the stitching gap is located, and a first area whose display parameters do not meet the display conditions; or,
a complementary merging processing unit, configured to perform complementary merging on the second frame image and the first frame image to obtain a target frame image.
Based on the description of the embodiments above, the apparatus may further include:
the first control module, configured to determine a time interval between obtaining the first frame image and the second frame image, and to control the corresponding driving device, based on that interval, to complete the adjustment of the camera's view range.
Optionally, the apparatus may further include:
the second control module, configured to control the corresponding driving device, after the second frame image is obtained, to restore the rotated camera to its initial position, so as to continuously obtain corresponding target frame images and output a video stream formed of a plurality of target frame images; or,
a third control module, configured to obtain, after the second frame image is obtained, a third frame image at the camera's current view range and a fourth frame image of the camera at the first view range or the second view range, so as to continuously obtain target frame images at different view ranges, which the first processing module processes to output a corresponding video stream,
where the first view range is the camera's unadjusted view range, and the second view range is the camera's view range after at least two adjustments.
Optionally, the apparatus may further include:
the second processing module, configured to process the second frame image by using the first frame image, and to move the processed second frame image in reverse according to the rotation parameters used to adjust the view range, to obtain a target frame image, where the data amount carried by the target frame image is larger than that carried by the second frame image.
It should be noted that the modules and units in the apparatus embodiments above may be stored in a memory as program modules, with the processor executing the stored program modules to implement the corresponding functions. For the functions implemented by each program module and their combinations, and the technical effects achieved, refer to the description of the corresponding parts of the method embodiments above, which is not repeated here.
The present application also provides a computer-readable storage medium on which a computer program may be stored; the program can be called and loaded by a processor to implement the steps of the processing method described in the above embodiments.
Finally, it should be noted that, in the embodiments described above, unless the context clearly indicates otherwise, the words "a," "an," and "the" include the plural as well as the singular. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements. An element defined by the phrase "comprising a(n) …" does not exclude the presence of additional identical elements in the process, method, article, or apparatus that includes it.
In the description of the embodiments of this application, unless otherwise indicated, "/" means "or"; for example, A/B may represent A or B. "And/or" merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, in the description of the embodiments of this application, "plurality" means two or more.
The terms "first," "second," and the like, herein are used for descriptive purposes only and are not necessarily for distinguishing one operation, element or module from another, and not necessarily for describing or implying any actual such relationship or order between such elements, elements or modules. And is not to be taken as indicating or implying a relative importance or implying that the number of technical features indicated is such that the features defining "first", "second" or "a" may explicitly or implicitly include one or more such features.
In addition, the embodiments in this specification are described in a progressive or parallel manner; each embodiment focuses on its differences from the others, and identical or similar parts may be referred to across embodiments. Since the apparatus, system, and computer device disclosed in the embodiments correspond to the methods disclosed therein, their description is brief; for relevant details, refer to the description in the method section.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (10)
1. A method of processing, the method comprising:
obtaining a first frame image, wherein the first frame image is a spliced image obtained through at least two cameras;
obtaining a second frame image, wherein the second frame image is an image obtained by adjusting a view range of the camera, wherein an image area in the second frame image at the position corresponding to an abnormal image area in the first frame image is normal, and the view range of the camera is changed by controlling at least one camera to rotate;
performing compensation processing on the abnormal region in the first frame image by using the second frame image to obtain a target frame image;
the data amount carried by the target frame image is larger than the data amount carried by the first frame image.
2. The method of claim 1, wherein the obtaining a second frame image comprises:
if the first frame image does not contain a first object, obtaining the second frame image, wherein the first object is located outside the view ranges of the at least two cameras; or,
if a second object is detected entering a target area while the first frame image is obtained, obtaining the second frame image, wherein the target area is an area outside the view ranges of the at least two cameras; or,
if a stitching gap exists in the first frame image, obtaining the second frame image; or,
if the display parameters of the first area in the first frame image do not meet the corresponding display conditions, obtaining the second frame image; or,
upon obtaining an acquisition trigger event, obtaining the second frame image.
3. The method of claim 2, wherein the obtaining a second frame image comprises:
obtaining a first positional relationship between the first object and a first target camera determined from the at least two cameras, and adjusting a view range of the first target camera based at least on the first positional relationship, to obtain a second frame image including the first object; or,
obtaining a second positional relationship between the first object and the at least two cameras, and adjusting the view ranges of the at least two cameras based at least on the second positional relationship, to obtain a second frame image including the first object; or,
obtaining a third positional relationship between the target area and the camera, and adjusting a view range of at least one of the at least two cameras based at least on the third positional relationship, to obtain a second frame image including the second object; or,
controlling the at least two cameras to rotate according to the same rotation parameter, so as to obtain a second frame image including the area corresponding to the stitching gap; or,
controlling at least one camera corresponding to the first area to rotate, so as to obtain a second frame image in which the display parameters of the first area meet the corresponding display conditions.
4. A method according to any one of claims 1 to 3, further comprising:
determining a time interval between the first frame image and the second frame image, and controlling a corresponding driving device to complete the adjustment of the camera's view range based on the time interval.
5. A method according to any one of claims 1 to 3, further comprising:
after the second frame image is obtained, controlling a corresponding driving device to restore the rotated camera to an initial position, so as to continuously obtain corresponding target frame images and output a video stream formed of a plurality of target frame images; or,
after the second frame image is obtained, obtaining a third frame image at the camera's current view range, and obtaining a fourth frame image of the camera at a first view range or a second view range, so as to continuously obtain target frame images at different view ranges and process them to output a corresponding video stream, wherein the first view range is the camera's unadjusted view range, and the second view range is the camera's view range after at least two adjustments.
6. A method according to claim 2 or 3, wherein said processing said first frame image with said second frame image to obtain a target frame image comprises:
determining a target image area in the second frame image corresponding to the abnormal area of the first frame image, and fusing the target image area into the first frame image to obtain a target frame image, wherein the abnormal area of the first frame image includes at least one of: a predicted area of the first object, the target area, an area where the stitching gap is located, and a first area whose display parameters do not meet the display conditions; or,
and carrying out complementary merging processing on the second frame image and the first frame image to obtain a target frame image.
7. The method of claim 1, the method further comprising:
processing the second frame image by using the first frame image, and moving the processed second frame image in reverse according to the rotation parameters used to adjust the camera's view range, to obtain a target frame image, wherein the data amount carried by the target frame image is larger than the data amount carried by the second frame image.
8. A processing apparatus, the apparatus comprising:
The first obtaining module is used for obtaining a first frame image, wherein the first frame image is a spliced image obtained through at least two cameras;
the second obtaining module is used for obtaining a second frame image, wherein the second frame image is an image obtained by adjusting a view range of the camera, wherein an image area in the second frame image at the position corresponding to an abnormal image area in the first frame image is normal, and the view range of the camera is changed by controlling at least one camera to rotate;
the first processing module is used for performing compensation processing on an abnormal area in the first frame image by using the second frame image, to obtain a target frame image;
the data amount carried by the target frame image is larger than the data amount carried by the first frame image.
9. A processing system, the system comprising: an image acquisition device and an image processing device, wherein:
the image acquisition equipment comprises at least two cameras positioned at different positions, and the at least two cameras acquire images at the same time;
the image processing apparatus includes:
a first memory for storing a first program implementing the processing method according to any one of claims 1 to 7;
A first processor for loading and executing a first program stored in the first memory, and implementing the processing method according to any one of claims 1 to 7.
10. The system of claim 9, the image processing apparatus further comprising:
at least one driving device for controlling the rotation of a corresponding camera to change the camera's view range;
the second memory is used for storing a second program for stitching the images acquired by the at least two cameras to obtain a stitched image;
and the second processor is used for loading and executing the second program to stitch the images acquired by the at least two cameras, so as to obtain stitched images.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111446247.2A CN114245006B (en) | 2021-11-30 | 2021-11-30 | Processing method, device and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111446247.2A CN114245006B (en) | 2021-11-30 | 2021-11-30 | Processing method, device and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114245006A CN114245006A (en) | 2022-03-25 |
CN114245006B true CN114245006B (en) | 2023-05-23 |
Family
ID=80752341
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111446247.2A Active CN114245006B (en) | 2021-11-30 | 2021-11-30 | Processing method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114245006B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115499589B (en) * | 2022-09-19 | 2024-09-24 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170180680A1 (en) * | 2015-12-21 | 2017-06-22 | Hai Yu | Object following view presentation method and system |
CN105791701B (en) * | 2016-04-27 | 2019-04-16 | 努比亚技术有限公司 | Image capturing device and method |
CN108780568A (en) * | 2017-10-31 | 2018-11-09 | 深圳市大疆创新科技有限公司 | A kind of image processing method, device and aircraft |
CN108810417A (en) * | 2018-07-04 | 2018-11-13 | 深圳市歌美迪电子技术发展有限公司 | A kind of image processing method, mechanism and rearview mirror |
CN111524067B (en) * | 2020-04-01 | 2023-09-12 | 北京东软医疗设备有限公司 | Image processing method, device and equipment |
CN113099248B (en) * | 2021-03-19 | 2023-04-28 | 广州方硅信息技术有限公司 | Panoramic video filling method, device, equipment and storage medium |
WO2022204854A1 (en) * | 2021-03-29 | 2022-10-06 | 华为技术有限公司 | Method for acquiring blind zone image, and related terminal apparatus |
- 2021-11-30: CN CN202111446247.2A — patent CN114245006B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN114245006A (en) | 2022-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6471777B2 (en) | Image processing apparatus, image processing method, and program | |
EP3545686B1 (en) | Methods and apparatus for generating video content | |
CN109005334B (en) | Imaging method, device, terminal and storage medium | |
US10547784B2 (en) | Image stabilization | |
CN104052931A (en) | Image shooting device, method and terminal | |
WO2017149875A1 (en) | Image capture control device, image capture device, and image capture control method | |
CN103986867A (en) | Image shooting terminal and image shooting method | |
CN110365896B (en) | Control method and electronic equipment | |
CN114125179B (en) | Shooting method and device | |
CN112655194A (en) | Electronic device and method for capturing views | |
WO2016125946A1 (en) | Panorama image monitoring system using plurality of high-resolution cameras, and method therefor | |
CN114245006B (en) | Processing method, device and system | |
CN110809885A (en) | Image sensor defect detection | |
JP2010034652A (en) | Multi-azimuth camera mounted mobile terminal apparatus | |
CN104104881A (en) | Shooting method of object and mobile terminal | |
CN108810326B (en) | Photographing method and device and mobile terminal | |
WO2017104102A1 (en) | Imaging device | |
CN110086994A (en) | A kind of integrated system of the panorama light field based on camera array | |
US20200099862A1 (en) | Multiple frame image stabilization | |
JP2007049266A (en) | Picture imaging apparatus | |
JP2017050819A (en) | Generation device for panoramic image data, generation method and program | |
WO2015141185A1 (en) | Imaging control device, imaging control method, and storage medium | |
CN112887653A (en) | Information processing method and information processing device | |
KR101993468B1 (en) | Apparatus and method for stabilizing multi channel image | |
JP2020202503A (en) | Imaging device, computer program, and recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |