CN117459666A - Image processing method and image processor - Google Patents

Image processing method and image processor

Info

Publication number
CN117459666A
Authority
CN
China
Prior art keywords
video
picture
image
parameters
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311401196.0A
Other languages
Chinese (zh)
Inventor
王勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xinchuanhui Electronic Technology Co ltd
Original Assignee
Xinchuanhui Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xinchuanhui Electronic Technology Co ltd filed Critical Xinchuanhui Electronic Technology Co ltd
Priority to CN202311401196.0A priority Critical patent/CN117459666A/en
Publication of CN117459666A publication Critical patent/CN117459666A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

An image processing method and an image processor relate to the technical field of image processing. In the method, a first video and a second video are acquired from a first preset memory, and the first video and the second video are spliced to generate a first target video, wherein the first video is a video acquired by a first camera and stored in the first preset memory, and the second video is a video acquired by a second camera and stored in the first preset memory; an instruction indicating to adjust a plurality of parameters of the first video is received from a controller; based on the instruction, a first parameter of the plurality of parameters of the first video is adjusted to generate a third video; and based on the adjustment result, the third video is spliced with the second video to generate a second target video, and the second target video is presented. By implementing the technical scheme provided by the application, the effect of improving video stitching efficiency is achieved.

Description

Image processing method and image processor
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and an image processor.
Background
In real life, to provide a better user experience it is often necessary to view panoramic video, but the shooting angle of a single camera is limited and it is difficult for one camera to capture a panorama. Multiple segments of video therefore need to be combined by video stitching technology, that is, the technology of splicing several video segments with overlapping portions into one continuous video.
In the existing image stitching method, the captured video clips generally need to be acquired from each camera in sequence, and the panoramic video is then generated by video stitching software, so panoramic video generation is slow. In addition, when some video clips need to be adjusted after stitching, the adjusted result can only be displayed after all the video clips have been re-spliced, so the user cannot judge in real time, from the adjusted display effect, whether an input adjustment instruction is reasonable, and multiple rounds of adjustment are needed to reach the expected display effect.
Therefore, the existing video stitching method costs workers considerable time, and how to improve video stitching efficiency has become a problem to be solved.
Disclosure of Invention
The application provides an image processing method which can improve the visual quality of spliced video.
In a first aspect, the present application provides an image processing method, including: acquiring a first video and a second video from a first preset memory, and splicing the first video and the second video to generate a first target video, wherein the first video is a video acquired by a first camera and stored in the first preset memory, and the second video is a video acquired by a second camera and stored in the first preset memory; receiving an instruction from a controller indicating to adjust a plurality of parameters of the first video; based on the instruction, adjusting a first parameter of the plurality of parameters of the first video to generate a third video; and based on the adjustment result, splicing the third video with the second video, generating a second target video, and presenting the second target video.
By adopting the technical scheme, the first video and the second video can be acquired directly from the first preset memory, so the first target video can be generated quickly. Any one of the plurality of parameters corresponding to the first video can then be adjusted according to the display effect of the first target video to generate the third video, and the third video and the second video are spliced to generate and present the second target video. The user can thus view in real time the stitching effect produced after a parameter is adjusted, determine from that effect whether the parameters need to be adjusted again, and quickly generate a panoramic video that reaches the expected display effect.
Optionally, after the second target video is generated, the remaining parameters other than the first parameter in the third video are adjusted to generate a fourth video; and based on the adjustment result, the fourth video is spliced with the second video to generate a third target video, and the third target video is presented.
By adopting the technical scheme, the user can adjust other parameters according to the display effect of the second target video to generate a fourth video, and then splice the fourth video and the second video to present a further optimized third target video, so that the user can view the video display effect after each parameter adjustment in real time.
Optionally, acquiring the first video and the second video from a first preset memory, and stitching the first video and the second video to generate a first target video, including: determining a sequence of the first video and the second video based on a relative positional relationship between the first camera and the second camera; determining a first picture displayed at a first moment from a first video, and determining a second picture from a second video, wherein the second picture is a picture displayed at the same moment with the first picture in the second video; splicing the first picture and the second picture to generate a spliced picture at a first moment; detecting whether the pictures in the first video and the pictures in the second video are spliced; and when the pictures in the first video and the pictures in the second video are spliced, generating a first target video.
By adopting the technical scheme, after the display sequence of the first video and the second video is determined according to the position relation of the first camera and the second camera, the first video and the second video are spliced according to the display sequence to generate the first target video, so that the step of generating the first target video is simplified, and the speed of generating the first target video is improved.
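The pairing-and-splicing procedure above can be sketched in a few lines. This is an illustrative toy model, not the patent's implementation: videos are dictionaries mapping a timestamp to a frame, a "frame" is a list of pixel rows, and the names `stitch_pair` and `stitch_videos` are invented for the example.

```python
# Hypothetical sketch: pair frames shown at the same instant and splice
# them in the left-to-right order decided by the camera layout.

def stitch_pair(left_frame, right_frame):
    # Placeholder splice: concatenate each scanline left-to-right.
    return [l + r for l, r in zip(left_frame, right_frame)]

def stitch_videos(first_video, second_video, first_is_left=True):
    """Pair frames displayed at the same moment and splice them in display order."""
    left, right = (first_video, second_video) if first_is_left else (second_video, first_video)
    target = {}
    for t in sorted(left):
        if t in right:                       # only instants present in both videos
            target[t] = stitch_pair(left[t], right[t])
    # "Detect whether all pictures are spliced": every shared timestamp is done.
    assert set(target) == set(left) & set(right)
    return target

# Three instants, echoing the 10:01 / 10:02 / 10:03 example in the description.
v1 = {t: [["A", "B"]] for t in ("10:01", "10:02", "10:03")}
v2 = {t: [["C", "D"]] for t in ("10:01", "10:02", "10:03")}
panorama = stitch_videos(v1, v2)
```

Sorting by timestamp and composing left-to-right is one simple way to realize the "display sequence" determined from the relative camera positions.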
Optionally, the process of splicing the first picture and the second picture to generate the spliced picture at the first moment may be: respectively extracting a first image feature of a first picture and a second image feature of a second picture, wherein the similarity between the first image feature and the second image feature is greater than or equal to a preset threshold; and setting the first picture and the second picture in the same coordinate system based on the position information of the image corresponding to the first image characteristic in the first picture and the position information of the image corresponding to the second image characteristic in the second picture so as to generate a spliced picture.
By adopting the technical scheme, the first picture and the second picture are arranged in the same coordinate system according to the similarity degree of the first image characteristic and the second image characteristic, and the overlapping area of the first picture and the second picture can be accurately determined, so that a spliced picture with better picture splicing effect is generated.
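One minimal way to picture the overlap-alignment idea is a one-dimensional analogue: find the widest overlap at which the right edge of the first picture matches the left edge of the second with similarity at or above a preset threshold, then place the second picture at the corresponding offset in a shared coordinate system. The `find_overlap` function and the pixel-match similarity measure are assumptions for illustration, not the patent's exact feature-matching algorithm.

```python
# Illustrative overlap alignment on single scanlines (lists of pixel values).

def similarity(a, b):
    """Fraction of matching pixels between two equal-length strips."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def find_overlap(row1, row2, threshold=0.9):
    """Return the widest overlap whose similarity meets the preset threshold."""
    for width in range(min(len(row1), len(row2)), 0, -1):
        if similarity(row1[-width:], row2[:width]) >= threshold:
            return width
    return 0

def merge_rows(row1, row2):
    w = find_overlap(row1, row2)
    # Shared coordinate system: row2 starts at x = len(row1) - w.
    return row1 + row2[w:]

merged = merge_rows([1, 2, 3, 4, 5], [4, 5, 6, 7])   # overlap is [4, 5]
```

Real implementations compare feature descriptors rather than raw pixels, but the structure — locate the best-matching region, then register both pictures in one coordinate frame — is the same.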
Optionally, before extracting the first image feature of the first picture and the second image feature of the second picture, respectively, the method further includes: reading preset calibration parameters from a second preset memory, wherein the preset calibration parameters are used for representing parameters of the recording areas of the first video and the second video; and correcting image distortion of the first picture and the second picture according to the preset calibration parameters.
By adopting the technical scheme, after image distortion of the first picture and the second picture is corrected according to the acquired preset calibration parameters, less image-feature extraction work is needed when the pictures are spliced, so the picture stitching speed is increased.
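The distortion-correction step can be sketched with a single radial coefficient. This is a deliberately minimal model under an assumed pinhole camera with one radial term `k1`; real calibration parameters stored for each camera would normally carry more coefficients and an intrinsic matrix.

```python
# Hypothetical sketch: undo barrel distortion for one normalized image point
# using one step of the inverse radial model x_u = x_d / (1 + k1 * r^2).

def undistort_point(xd, yd, k1, cx=0.0, cy=0.0):
    """Map a distorted point back toward its ideal position around center (cx, cy)."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy               # squared distance from the center
    scale = 1.0 / (1.0 + k1 * r2)
    return cx + dx * scale, cy + dy * scale

# A point pushed outward by barrel distortion (k1 > 0) moves back toward the center.
corrected = undistort_point(0.5, 0.5, k1=0.1)
```

Applying such a mapping to every pixel before feature extraction is what lets the stitcher work on geometrically consistent pictures.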
Optionally, before the first video and the second video are acquired from the first preset memory, the first preset memory is initialized to delete the data stored in the first preset memory.
By adopting the technical scheme, the originally stored data is deleted before the first preset memory stores the first video and the second video, so that the process of searching the first video and the second video from all stored data can be avoided when the image processor acquires the first video and the second video, the data stored in the first preset memory can be directly read, and the efficiency of acquiring the first video and the second video is improved.
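A toy model makes the benefit of the initialization step concrete: once the store is cleared before the two videos are written, the reader can take everything it finds without searching among stale entries. The class and method names below are invented for illustration.

```python
# Hypothetical sketch of the "initialize the first preset memory" step.

class PresetMemory:
    def __init__(self):
        self._entries = []

    def initialize(self):
        self._entries.clear()            # delete previously stored data

    def store(self, video):
        self._entries.append(video)

    def read_all(self):
        return list(self._entries)       # no filtering or lookup needed

mem = PresetMemory()
mem.store("stale clip")
mem.initialize()                         # wipe before the new capture session
mem.store("first video")
mem.store("second video")
videos = mem.read_all()
```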
In a second aspect of the present application, there is provided an image processor comprising: the acquisition module is used for acquiring a first video and a second video from a first preset memory, and splicing the first video and the second video to generate a first target video, wherein the first video is a video acquired by a first camera and stored in the first preset memory, and the second video is a video acquired by a second camera and stored in the first preset memory; a receiving module for receiving an instruction from the controller indicating to adjust a plurality of parameters of the first video; the generation module is used for adjusting a first parameter in a plurality of parameters of the first video based on the instruction to generate a third video; and the presentation module is used for splicing the third video with the second video based on the adjustment result, generating a second target video and presenting the second target video.
In a third aspect of the present application, there is provided an image processing system comprising a plurality of cameras, a storage device, a controller, and an image processor; each of the plurality of cameras for capturing video and storing the captured video to a storage device; the storage device is used for storing videos acquired by the cameras; the image processor acquires the first video and the second video from the storage device and splices the first video and the second video; a controller for sending instructions to the image processor indicating to adjust a plurality of parameters of the first video; and the image processor is used for adjusting one parameter of a plurality of parameters in the first video, and splicing the first video with the second video based on an adjustment result to generate a target video.
In a fourth aspect of the present application, there is provided an electronic device comprising a processor (501), a memory (505), a user interface (503) and a network interface (504), the memory (505) being for storing instructions, the user interface (503) and the network interface (504) being for communicating with other devices, and the processor (501) being for executing the instructions stored in the memory (505) to cause the electronic device (500) to perform a method according to any of the first aspects.
In a fifth aspect of the present application there is provided a computer readable storage medium storing instructions which, when executed, perform the method steps of any of the first aspects.
In summary, one or more technical solutions provided in the embodiments of the present application at least have the following technical effects or advantages:
1. the first video and the second video are directly obtained from the first preset memory, the first video and the second video can be prevented from being respectively collected from video sources, so that the speed of generating the first target video is improved, a plurality of parameters of the first video are adjusted according to the display effect of the first target video, and the generated second target video is directly presented after one parameter is adjusted, so that a user can conveniently check the adjusted display effect, and the speed of generating the target video reaching the expected display effect is improved.
2. According to the position information of the first image feature and the second image feature, the first picture and the second picture are arranged in the same coordinate system, the splicing position of the first picture and the second picture is determined, and then the overlapping area of the first picture and the second picture is deleted, so that a spliced picture with better picture splicing effect can be generated.
Drawings
Fig. 1 is a schematic architecture diagram of an image processing system provided in an embodiment of the present application.
Fig. 2 is a flow chart of a video stitching method disclosed in an embodiment of the present application.
Fig. 3 is a schematic view of a camera mounting position disclosed in an embodiment of the present application.
Fig. 4 is a schematic structural diagram of an image processor according to an embodiment of the present application.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present application, but not all embodiments.
In the description of embodiments of the present application, words such as "such as" or "for example" are used to indicate examples, illustrations or descriptions. Any embodiment or design described herein with "such as" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, such words are intended to present related concepts in a concrete fashion.
In the description of the embodiments of the present application, the term "plurality" means two or more. For example, a plurality of systems means two or more systems, and a plurality of screen terminals means two or more screen terminals. Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating an indicated technical feature. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
For monitoring requirements in scenes such as command dispatching centers, wharf sites and vehicle monitoring, single-camera monitoring cannot meet the application requirements of large-field-of-view conditions. However, if the scenes shot by a plurality of cameras are simply spliced, differences in shooting conditions, camera parameters and other factors between the video clips cause obvious visual discontinuities in the spliced video, such as uneven splicing seams, cut-off images and brightness changes, which are not conducive to subsequent target detection and processing.
In order to solve the above technical problems, the prior art proposes to collect the camera parameters of a plurality of cameras and a plurality of captured video segments, store the camera parameters and video data to generate a video file, and process the video file together with the collected camera parameters through video stitching software to generate and display a panoramic video. The display effect of the displayed panoramic video is then checked; if it does not meet the expected display requirement, a plurality of display parameters of the panoramic video are adjusted, a new panoramic video is generated and displayed, and it is judged again whether the adjusted display effect meets the expectation. If it still does not, the above adjustment process is repeated until the expected display effect is reached.
However, the video stitching scheme proposed in the prior art involves many steps, and when the display effect of the panoramic video is adjusted, the newly generated panoramic video has to be saved and displayed many times over.
In order to avoid the technical problems in the prior art, the application provides a video splicing system, which can improve video splicing efficiency.
Fig. 1 shows a schematic architecture diagram of a video stitching system suitable for use in embodiments of the present application. In the scenario shown in fig. 1, the video stitching system includes four cameras: camera 101, camera 102, camera 103, camera 104, controller 105, image processor 106, and storage 107. It can be appreciated that the video stitching system 100 shown in the embodiments of the present application may include more or fewer cameras and peripheral devices, and the number of cameras is not specifically limited in the embodiments of the present application, and is set based on the requirements of the application scenario. The following describes an example in which the video stitching system 100 includes four cameras.
Alternatively, the camera 101, the camera 102, the camera 103, and the camera 104 may establish communication with the image processor 106 in a wired or wireless manner, respectively, and the image processor 106 may establish communication with the controller 105 in a wired or wireless manner.
In this embodiment, after the camera 101 is installed at a preset position, it is used to collect video at that position and store the video to the storage device 107 to generate a video file, so that the image processor 106 can read the collected video. The purposes and communication manners of the camera 102, the camera 103 and the camera 104 are as described above and will not be repeated here.
The controller 105: the system consists of a program counter, an instruction register, an instruction decoder, a time sequence generator and an operation controller, and is used for completing the coordination and the command of the operation of the whole computer system. In this embodiment of the present application, the controller 105 may be a central processing unit (Central Processing Unit, CPU), which is configured to control the image processor 106 to read video files stored in the memory component and camera parameters of each camera according to a preset program, and then generate a stitched video based on the read video files and camera parameters, and after generating the stitched video, may also respond to a parameter adjustment instruction sent by a user, and control the image processor 106 to adjust display parameters of the stitched video to generate a target video.
Image processor 106: in the embodiment of the present application, the image processor 106 may be an image signal processor (Image Signal Processing, ISP) configured to read the video files and camera parameters stored in each camera stored in the storage device according to a preset time interval, generate a stitched video according to the video files and camera parameters, and generate a target video in response to a parameter adjustment instruction sent by the controller 105 after generating the stitched video.
Storage device 107: for storing camera parameters and captured video clips corresponding to the camera 101, the camera 102, the camera 103 and the camera 104, respectively.
In combination with the above, it can be seen that in the video stitching system provided by the present application, the image processor 106 establishes communication with the camera 101, the camera 102, the camera 103 and the camera 104, and, in response to an instruction sent by the controller 105, obtains the video file and camera parameters stored by each camera from the storage device 107 and generates a stitched video according to them. The controller 105 can display the stitched video through any type of video display device; the user determines, according to the stitching effect of the displayed video, whether to adjust its display parameters, and if so, the controller 105 sends a parameter adjustment instruction to the image processor 106 to generate the target video.
It should be understood that the controller 105 and the image processor 106 may be installed in the same terminal device or may be installed in different terminal devices, which is not particularly limited in this application.
Referring to fig. 2, fig. 2 is a flowchart 200 of an image processing method according to an embodiment of the present application, based on the video stitching system shown in fig. 1. The image processing method is applied to the image processor 106 shown in fig. 1.
Hereinafter, an image processing method according to an embodiment of the present application will be described in detail with reference to fig. 2.
Step S201: and acquiring a first video and a second video from a first preset memory, and splicing the first video and the second video to generate a first target video, wherein the first video is a video collected by a first camera and stored in the first preset memory, and the second video is a video collected by a second camera and stored in the first preset memory.
The first predetermined memory may be understood as the storage device 107.
The first video and the second video may be understood as video clips photographed from different positions, and there is an overlapping region of the first video and the second video.
Optionally, the image processor is configured to periodically fetch the stored video from the first preset memory. For example, the image processor acquires the stored video from the first preset memory every 1 second.
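The periodic-read behaviour can be sketched as a small polling helper. The injected `now` timestamp stands in for a real clock so the logic is self-contained; the 1-second interval matches the example above, and all names here are illustrative.

```python
# Hypothetical sketch: fetch from the preset memory at most once per interval.

def make_poller(read_fn, interval=1.0):
    state = {"last": None}
    def poll(now):
        # Read on the first tick, then whenever a full interval has elapsed.
        if state["last"] is None or now - state["last"] >= interval:
            state["last"] = now
            return read_fn()
        return None                      # too soon: skip this tick
    return poll

reads = []
poll = make_poller(lambda: reads.append("read") or "videos", interval=1.0)
results = [poll(t) for t in (0.0, 0.5, 1.0, 1.2, 2.0)]
```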
It should be appreciated that the image processor may first display the first target video via the display device after generating the first target video so that the user may view the stitching effect.
It should also be understood that the first preset memory may also be used to store video clips shot by other cameras, which is not specifically limited in this application.
In an alternative embodiment, the first preset memory is initialized to delete the data stored in the first preset memory before the first video and the second video are acquired from the first preset memory.
In this embodiment, before the first preset memory stores the first video and the second video, the originally stored data is deleted, so that when the image processor acquires the first video and the second video, the process of searching the first video and the second video from all the stored data can be avoided, so that the data stored in the first preset memory can be directly read, and the efficiency of acquiring the first video and the second video is improved.
In an alternative embodiment, the image processor acquires a first video and a second video from a first preset memory, and splices the first video and the second video to generate a first target video, which includes: determining a sequence of the first video and the second video based on a relative positional relationship between the first camera and the second camera; determining a first picture displayed at a first moment from a first video, and determining a second picture from a second video, wherein the second picture is a picture displayed at the same moment with the first picture in the second video; splicing the first picture and the second picture to generate a spliced picture at a first moment; detecting whether the pictures in the first video and the pictures in the second video are spliced; and when the pictures in the first video and the pictures in the second video are spliced, generating a first target video.
The first picture may be understood as a picture displayed at any one time in the first video.
Take the camera 301 shown in fig. 3 as the first camera and the camera 302 as the second camera, and suppose the first video and the second video each contain three frames: the first frame is the picture displayed at 10:01, the second at 10:02 and the third at 10:03, so the first picture and the second picture are the pictures displayed at 10:01 in the first video and the second video, respectively. As shown in fig. 3, the relative positional relationship between the first camera and the second camera is that the first camera is located at the left rear of the second camera, with an included angle of 90 degrees, so when viewing the video from left to right, the first video is displayed first and then the second video. After the first picture and the second picture are selected from the first video and the second video, they are spliced from left to right according to this display sequence to generate the spliced picture for 10:01. The steps for generating the 10:01 spliced picture are then repeated: the second and third frames are selected from the first video and the second video respectively, and the spliced pictures for 10:02 and 10:03 are generated from the selected frames. Finally, the spliced pictures for 10:01, 10:02 and 10:03 are combined in chronological order to generate the first target video.
In this embodiment, the image processor determines the display sequence of the first video and the second video according to the positional relationship between the first camera and the second camera, and then splices the first video and the second video according to the display sequence to generate the first target video, thereby simplifying the step of generating the first target video and further improving the speed of generating the first target video.
Optionally, the process of splicing the first picture and the second picture to generate the spliced picture at the first moment may be: respectively extracting a first image feature of a first picture and a second image feature of a second picture, wherein the similarity between the first image feature and the second image feature is greater than or equal to a preset threshold; and setting the first picture and the second picture in the same coordinate system based on the position information of the image corresponding to the first image characteristic in the first picture and the position information of the image corresponding to the second image characteristic in the second picture so as to generate a spliced picture.
The first image feature is used to represent information of the content displayed on the first screen. For example, the first image feature may be a color histogram, a luminance histogram, a texture feature, a spatial relationship feature, or the like of the first image, or may be a corner point, which is not specifically limited in this application.
The second image feature is used for information representing the content of the second screen display.
The preset threshold is a value for judging whether the image features represented by the first image feature and the second image feature are identical.
The coordinate system may be a world coordinate system, a camera coordinate system, or an image coordinate system, which is not particularly limited in this application.
The image processor extracts the image features of the first picture and the image features of the second picture, compares them, and determines a first image feature and a second image feature whose similarity is greater than or equal to the preset threshold. It then sets the first picture and the second picture in the same coordinate system according to the position information of the first image feature and the second image feature, and determines the overlapping portion of the first picture and the second picture to generate the spliced picture.
In this example, the image processor sets the first picture and the second picture in the same coordinate system according to the similarity of the first image feature and the second image feature, so that the overlapping area of the two pictures can be determined more accurately, generating a spliced picture with a better splicing effect.
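For illustration, here is a much-simplified sketch of the feature-matching idea: each picture is reduced to a single pixel row, the "image feature" is a run of pixel values, and similarity is exact equality (which trivially meets the preset threshold). All names are hypothetical, and a practical system would use real feature descriptors (histograms, corner points, etc.) as described above:

```python
def find_overlap(first_row, second_row):
    # Search for the widest suffix of the first picture's row that matches
    # a prefix of the second picture's row; a match plays the role of a
    # pair of image features whose similarity meets the preset threshold.
    for width in range(min(len(first_row), len(second_row)), 0, -1):
        if first_row[-width:] == second_row[:width]:
            return width
    return 0

def stitch_with_overlap(first_row, second_row):
    # Place both rows in one coordinate system using the position of the
    # matched features: the overlapping region is kept only once.
    width = find_overlap(first_row, second_row)
    return first_row + second_row[width:]

row = stitch_with_overlap([1, 2, 3, 4], [3, 4, 5, 6])
```

Here the two-pixel overlap `[3, 4]` anchors the two pictures, so the spliced row contains the shared region a single time.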
It should be understood that, after determining the first image feature and the second image feature, the image processor may alternatively directly delete the region where the first image feature is located in the first picture and the region where the second image feature is located in the second picture, and then splice the first picture and the second picture with those regions deleted to generate a spliced picture.
In some embodiments, before extracting the first image feature of the first picture and the second image feature of the second picture, the method further comprises: reading preset calibration parameters from a second preset memory, wherein the preset calibration parameters are used to represent parameters of the recording areas of the first video and the second video; and correcting the image distortion of the first picture and the second picture according to the preset calibration parameters.
The second preset memory is a device for storing the preset calibration parameters. In this embodiment of the present application, the second preset memory and the first preset memory may be the same memory device or different memory devices, which is not limited in this application.
The preset calibration parameters may be parameters for representing coordinates and rotation angles of the camera. In this embodiment of the present application, the preset calibration parameters may be parameters including coordinates and rotation angle of the first camera and coordinates and rotation angle of the second camera.
The image processor reads the preset calibration parameters from the second preset memory, determines recording areas of the first video and the second video according to the preset calibration parameters, and cuts, moves or rotates content displayed on a first picture and a second picture in the first video and the second video according to the determined recording areas, so that each frame of image in the first video and each frame of image in the second video meet preset video recording requirements.
In this example, after the image processor corrects the image distortion of the first picture and the second picture according to the obtained preset calibration parameters, the amount of image-feature extraction required during splicing can be reduced, thereby increasing the splicing speed.
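A minimal sketch of calibration-based correction, assuming the calibration parameter of interest is the camera's rotation angle restricted to multiples of 90 degrees (the embodiment above would also handle camera coordinates, cropping, and arbitrary angles); the function name and frame model are illustrative:

```python
def correct_frame(frame, rotation_deg):
    # Rotate the frame by the camera's calibrated rotation angle so that
    # its content meets the preset recording requirement. Only multiples
    # of 90 degrees are modelled in this simplified sketch.
    for _ in range((rotation_deg // 90) % 4):
        # Rotate 90 degrees clockwise: reverse the rows, then transpose.
        frame = [list(row) for row in zip(*frame[::-1])]
    return frame

corrected = correct_frame([[1, 2], [3, 4]], 90)
```

A full rotation (360 degrees) leaves the frame unchanged, which is a quick sanity check on the rotation logic.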
In other embodiments, after the image processor acquires the first video and the second video from the first preset memory, the first video and the second video are displayed directly, and the user may adjust the display parameters of the first video and the second video according to their display effects, so as to correct the image distortion of the first picture and the second picture.
Step S202: an instruction is received from a controller indicating an adjustment to a plurality of parameters of a first video.
The plurality of parameters are used to represent display parameters of the first video. For example, the plurality of parameters include the video image size, the video display position, the brightness, and the pixel resolution.
As described above, the controller may fetch instructions, analyze instructions, and execute instructions.
In an exemplary embodiment, the user moves the display position of the first video upward through the touch screen; the controller receives and analyzes the adjustment instruction sent from the touch screen, and controls the image processor by sending it an instruction to move the first video upward.
It should be understood that the user may also input the adjustment instruction through a mouse or a keyboard, which is not specifically limited in this application.
Step S203: based on the instruction, a first parameter of a plurality of parameters of the first video is adjusted to generate a third video.
The first parameter may be any one of a plurality of parameters. For example, the first parameter may be a display position parameter.
The third video is the video obtained after any one of the plurality of parameters corresponding to the first video is changed.
Illustratively, the image processor receives the adjustment instruction in real time and generates the third video in real time according to the adjustment instruction.
Step S204: and based on the adjustment result, splicing the third video with the second video, generating a second target video, and presenting the second target video.
Illustratively, the splicing steps are the same as those described above for generating the first target video: the image processor splices the pictures displayed by the third video and the second video at the same moment to generate spliced pictures, then synthesizes the spliced pictures in time order to generate the second target video, i.e., a spliced video changed according to the adjustment instruction.
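The adjust-then-resplice flow can be sketched as follows, modelling only a horizontal display-position shift as the adjusted first parameter; the frame model, instruction format, and all function names are hypothetical:

```python
def adjust_frame(frame, instruction):
    # Apply one adjustment instruction to a frame. Only a horizontal
    # display-position shift is modelled; vacated pixels become zeros.
    if instruction["parameter"] == "position_x":
        dx = instruction["value"]
        return [[0] * dx + row[:len(row) - dx] for row in frame]
    return frame

def stitch_pair(left, right):
    # Join two same-moment frames side by side.
    return [l + r for l, r in zip(left, right)]

def generate_second_target(first_video, second_video, instruction):
    # Adjusting the first video yields the third video, which is then
    # spliced with the second video frame by frame to give the target.
    third_video = [adjust_frame(f, instruction) for f in first_video]
    return [stitch_pair(a, b) for a, b in zip(third_video, second_video)]

first = [[[1, 2, 3]], [[4, 5, 6]]]   # two one-row frames
second = [[[7, 8]], [[9, 0]]]
target2 = generate_second_target(
    first, second, {"parameter": "position_x", "value": 1})
```

Because the adjustment is applied before splicing, re-running this with a new instruction immediately yields a new spliced result, matching the real-time preview behaviour described above.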
It should be understood that the image processor may also adjust the second video first, take the adjusted second video as the third video, and then splice the third video and the first video to generate the second target video.
According to the image processing method described above, the image processor can directly acquire the first video and the second video from the first preset memory to quickly generate the first target video. Based on the display effect of the first target video, any one of the plurality of parameters corresponding to the first video can be adjusted to generate the third video, and the third video is spliced with the second video to generate and present the second target video. The user can thus check in real time the splicing effect produced after each parameter adjustment, determine from that effect whether the parameters need to be adjusted again, and quickly generate a panoramic video that achieves the expected display effect.
In an alternative embodiment, the image processor adjusts the rest parameters except the first parameter in the third video after generating the second target video to generate a fourth video; and based on the adjustment result, splicing the fourth video with the second video, generating a third target video, and presenting the third target video.
It is readily understood that the remaining parameters are used to represent parameters other than the first parameter of the plurality of parameters. For example, the remaining parameter may be video image size.
After the second target video is displayed, the user adjusts any one of the remaining parameters corresponding to the third video according to the display requirement to generate a fourth video, and splices the fourth video with the second video (the splicing steps are the same as those for splicing the first video and the second video) to generate and present the third target video.
It should be understood that, after the second target video is generated, the image processor may further continuously adjust the first parameter, and splice the first video after the first parameter is adjusted again with the second video to generate the third target video, which is not limited in this application.
It should be further understood that after the third target video is generated, the image processor may further adjust the plurality of parameters corresponding to the fourth video to generate a new target video, or may adjust the plurality of parameters corresponding to the second video to generate a new video and then splice the new video with the fourth video to generate a new target video.
In the embodiment of the application, the user can adjust the other parameters according to the display effect of the second target video to generate the fourth video, and then splice the fourth video and the second video to present the further optimized third target video, so that the user can view the video display effect after each parameter adjustment in real time.
When the video clips obtained by the image processor from the first preset memory include the first video and the second video respectively shot by the camera 101 and the camera 102, as well as the video clips respectively shot by the camera 103 and the camera 104, the method of splicing the four video clips is the same as that described above for splicing the first video and the second video: the first video is spliced with the second video, the second video is spliced with the video clip shot by the camera 103, and the video clip shot by the camera 103 is spliced with the video clip shot by the camera 104; a new panoramic video is then generated according to the adjustment instruction, which is not repeated in this application.
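The pairwise chaining of more than two videos described above can be sketched, for a single moment, as a left-to-right reduction over the per-camera frames; the frame contents and names are illustrative only:

```python
from functools import reduce

def stitch_pair(left, right):
    # Join two same-moment frames side by side.
    return [l + r for l, r in zip(left, right)]

def stitch_chain(frames):
    # Splice an ordered list of frames (one per camera, left to right)
    # by repeatedly splicing adjacent pairs, as described for the video
    # clips of cameras 101 through 104.
    return reduce(stitch_pair, frames)

# One single-pixel frame per camera, ordered 101 -> 104.
panorama = stitch_chain([[[1]], [[2]], [[3]], [[4]]])
```

The reduction makes the pairwise structure explicit: each step reuses the two-video splicing step, so extending from two cameras to four requires no new splicing logic.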
It will be appreciated that the image processor, in order to implement the functions described in fig. 2, includes corresponding hardware and/or software modules that perform the respective functions. The steps of the examples described in connection with the embodiments disclosed herein may be embodied in hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application in conjunction with the embodiments, but such implementation is not to be considered as outside the scope of this application.
The present embodiment may divide the image processor into functional modules according to the above method example; for example, each functional module may correspond to one function, or two or more functions may be integrated in one processing module. The integrated modules may be implemented in hardware. It should be noted that the division of modules in this embodiment is schematic and is merely a logical function division; other division manners may be used in actual implementation.
Fig. 4 shows a possible schematic diagram of the image processor 400 involved in the above-described embodiment in the case of dividing the respective functional modules with the respective functions, the image processor 400 including: an obtaining module 401, configured to obtain a first video and a second video from a first preset memory, and splice the first video and the second video to generate a first target video, where the first video is collected by a first camera and stored in the first preset memory, and the second video is collected by a second camera and stored in the first preset memory; a receiving module 402, configured to receive, from a controller, an instruction indicating to adjust a plurality of parameters of a first video; a generating module 403, configured to adjust a first parameter of a plurality of parameters of the first video based on the instruction, and generate a third video; and a presenting module 404, configured to splice the third video with the second video based on the adjustment result, generate a second target video, and present the second target video.
In an optional implementation manner of this embodiment of the present application, the generating module 403 is further configured to adjust parameters other than the first parameter in the third video to generate a fourth video; and based on the adjustment result, splicing the fourth video with the second video, generating a third target video, and presenting the third target video.
In an optional implementation manner of this embodiment of the present application, the obtaining module 401 is configured to determine a sequence of the first video and the second video based on a relative positional relationship between the first camera and the second camera; determining a first picture displayed at a first moment from a first video, and determining a second picture from a second video, wherein the second picture is a picture displayed at the same moment with the first picture in the second video; splicing the first picture and the second picture to generate a spliced picture at a first moment; detecting whether the pictures in the first video and the pictures in the second video are spliced; and when the pictures in the first video and the pictures in the second video are spliced, generating a first target video.
In an optional implementation manner of this embodiment of the present application, the obtaining module 401 is configured to extract a first image feature of a first picture and a second image feature of a second picture, where a similarity between the first image feature and the second image feature is greater than or equal to a preset threshold; and setting the first picture and the second picture in the same coordinate system based on the position information of the image corresponding to the first image characteristic in the first picture and the position information of the image corresponding to the second image characteristic in the second picture so as to generate a spliced picture.
In an optional implementation manner of this embodiment of the present application, the obtaining module 401 is further configured to read a preset calibration parameter from a second preset memory, where the preset calibration parameter is used to represent parameters of the first video and the second video recording area; and correcting the image deformity of the first picture and the second picture according to preset calibration parameters.
In an optional implementation manner of this embodiment of the present application, the obtaining module 401 is further configured to initialize a first preset memory to delete data stored in the first preset memory.
The application also discloses electronic equipment. Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to the disclosure in an embodiment of the present application. The electronic device 500 may include: at least one processor 501, at least one network interface 504, a user interface 503, a memory 505, at least one communication bus 502.
Wherein a communication bus 502 is used to enable connected communications between these components.
The user interface 503 may include a Display screen (Display) and a Camera (Camera), and the optional user interface 503 may further include a standard wired interface and a standard wireless interface.
The network interface 504 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface), among others.
Wherein the processor 501 may include one or more processing cores. The processor 501 connects various parts throughout the server using various interfaces and lines, and performs the various functions of the server and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 505 and invoking the data stored in the memory 505. Alternatively, the processor 501 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field-programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 501 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing the content to be displayed by the display screen; and the modem is used to handle wireless communications. It will be appreciated that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate chip.
The memory 505 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). Optionally, the memory 505 comprises a non-transitory computer-readable storage medium. The memory 505 may be used to store instructions, programs, code sets, or instruction sets. The memory 505 may include a stored-program area and a stored-data area, wherein the stored-program area may store instructions for implementing an operating system, instructions for at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the above method embodiments, etc.; and the stored-data area may store the data involved in the above method embodiments. The memory 505 may optionally also be at least one storage device located remotely from the processor 501. Referring to fig. 5, the memory 505, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an application program of an image processing method.
In the electronic device 500 shown in fig. 5, the user interface 503 is mainly used for providing an input interface for a user and acquiring data input by the user; and the processor 501 may be used to invoke the application program of the image processing method stored in the memory 505, which, when executed by the one or more processors 501, causes the electronic device 500 to perform the method described in one or more of the embodiments above. It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art should understand that the present application is not limited by the order of actions described, as some steps may be performed in another order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided herein, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units is merely a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable memory. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform all or part of the steps of the methods of the embodiments of the present application. And the aforementioned memory includes: various media capable of storing program codes, such as a U disk, a mobile hard disk, a magnetic disk or an optical disk.
The foregoing is merely exemplary embodiments of the present disclosure and is not intended to limit the scope of the present disclosure. That is, equivalent changes and modifications are contemplated by the teachings of this disclosure, which fall within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure.
This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a scope and spirit of the disclosure being indicated by the claims.

Claims (10)

1. An image processing method, comprising:
acquiring a first video and a second video from a first preset memory, and splicing the first video and the second video to generate a first target video, wherein the first video is a video acquired by a first camera and stored in the first preset memory, and the second video is a video acquired by a second camera and stored in the first preset memory;
receiving instructions from a controller indicating adjustments to a plurality of parameters of the first video;
based on the instruction, adjusting a first parameter of a plurality of parameters of the first video to generate a third video;
and based on the adjustment result, splicing the third video with the second video, generating a second target video, and presenting the second target video.
2. The method of claim 1, wherein after the generating the second target video, the method further comprises:
adjusting the rest parameters except the first parameter in the third video to generate a fourth video;
and based on the adjustment result, splicing the fourth video with the second video, generating a third target video, and presenting the third target video.
3. The method of claim 1, wherein retrieving a first video and a second video from a first preset memory, and stitching the first video and the second video to generate a first target video, comprises:
determining a sequence of the first video and the second video based on a relative positional relationship between the first camera and the second camera;
determining a first picture displayed at a first moment from the first video, and determining a second picture from the second video, wherein the second picture is a picture displayed at the same moment as the first picture in the second video;
splicing the first picture and the second picture to generate a spliced picture at the first moment;
detecting whether the pictures in the first video and the pictures in the second video are spliced;
And when the pictures in the first video and the pictures in the second video are spliced, generating the first target video.
4. A method according to claim 3, wherein the splicing the first picture and the second picture to generate the spliced picture at the first time comprises:
respectively extracting a first image feature of the first picture and a second image feature of the second picture, wherein the similarity between the first image feature and the second image feature is greater than or equal to a preset threshold;
and setting the first picture and the second picture in the same coordinate system based on the position information of the image corresponding to the first image characteristic in the first picture and the position information of the image corresponding to the second image characteristic in the second picture so as to generate the spliced picture.
5. The method of claim 4, wherein prior to the extracting the first image feature of the first picture and the second image feature of the second picture, respectively, the method further comprises:
reading preset calibration parameters from a second preset memory, wherein the preset calibration parameters are used to represent parameters of the recording areas of the first video and the second video;
And correcting image distortion of the first picture and the second picture according to the preset calibration parameters.
6. The method of claim 1, wherein prior to the retrieving the first video and the second video from the first preset memory, the method further comprises:
initializing the first preset memory to delete the data stored in the first preset memory.
7. An image processor, comprising:
the acquisition module is used for acquiring a first video and a second video from a first preset memory, and splicing the first video and the second video to generate a first target video, wherein the first video is a video acquired by a first camera and stored in the first preset memory, and the second video is a video acquired by a second camera and stored in the first preset memory;
a receiving module for receiving an instruction from a controller indicating to adjust a plurality of parameters of the first video;
the generation module is used for adjusting a first parameter in a plurality of parameters of the first video based on the instruction to generate a third video;
and the presentation module is used for splicing the third video with the second video based on the adjustment result, generating a second target video and presenting the second target video.
8. An image processing system comprising a plurality of cameras, a storage device, a controller, and an image processor;
each of the plurality of cameras for capturing video and storing the captured video to the storage device;
the storage device is used for storing videos acquired by the cameras;
the image processor acquires a first video and a second video from the storage device, and splices the first video and the second video;
the controller is used for sending instructions for indicating adjustment of a plurality of parameters of the first video to the image processor;
and the image processor adjusts one parameter of the multiple parameters in the first video, and based on an adjustment result, the image processor is spliced with the second video to generate a target video.
9. An electronic device comprising a processor (501), a memory (505), a user interface (503) and a network interface (504), the memory (505) being configured to store instructions, the user interface (503) and the network interface (504) being configured to communicate to other devices, the processor (501) being configured to execute the instructions stored in the memory (505) to cause the electronic device (500) to perform the method according to any of claims 1-6.
10. A computer readable storage medium storing instructions which, when executed, perform the method steps of any of claims 1-6.
CN202311401196.0A 2023-10-23 2023-10-23 Image processing method and image processor Pending CN117459666A (en)

Publications (1)

Publication Number Publication Date
CN117459666A true CN117459666A (en) 2024-01-26



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination