CN111225126A - Multi-channel video stream generation method and device - Google Patents
Multi-channel video stream generation method and device
- Publication number: CN111225126A
- Application number: CN201811404859.3A
- Authority
- CN
- China
- Prior art keywords
- image
- images
- shooting parameters
- sequence
- video stream
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N23/54—Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
- H04N23/60—Control of cameras or camera modules
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
- H04N5/2625—Studio circuits for obtaining an image which is composed of images from a temporal image sequence, e.g. for a stroboscopic effect
Abstract
The embodiments of this application provide a method and an apparatus for generating multiple video streams. In the method, different sets of shooting parameters are configured for a single image sensor, and the sensor is time-division multiplexed to capture image data under each set of parameters, yielding multiple video streams serving different purposes. Because one image sensor is time-division multiplexed to produce the multiple video streams, no additional hardware is required and cost is saved.
Description
Technical Field
The present application relates to the field of video surveillance, and in particular, to a method and an apparatus for generating multiple video streams.
Background
With the development of video surveillance technology, monitoring has evolved from simple video recording toward intelligent analysis. Smart cameras are a development focus of the current security industry: both intelligent vehicle monitoring at roads and checkpoints and face capture and recognition on ordinary streets depend on their performance. As user demand grows and camera software and hardware improve, how to further raise camera performance has become a research focus.
At present, the shooting parameters a smart camera uses in face-capture mode and in license-plate-capture mode differ markedly: face capture requires an exposure time of 10 ms or more to ensure the brightness and sharpness of the face, whereas license-plate capture requires an exposure time of no more than 4 ms, because the vehicle moves quickly and a longer exposure would blur the plate. To supply a downstream intelligent video analysis system with sufficient information, clear images of both faces and license plates must be acquired simultaneously; using two cameras, one for each, increases cost.
Disclosure of Invention
The embodiments of this application provide a method and an apparatus for generating multiple video streams, in which a single image sensor of a single camera acquires multiple video streams serving different service requirements, reducing hardware cost.
In a first aspect, an embodiment of this application provides a method for generating multiple video streams, applied to an image capturing device that includes an image sensor and an image processor. The method comprises: the image sensor acquires an image sequence in which images captured with multiple sets of shooting parameters are arranged alternately, the shooting parameters including at least one of exposure time and image gain; the image processor then generates multiple video streams from the image sequence, where the images of each video stream correspond to one set of shooting parameters. Because a single camera generates multiple video streams meeting different requirements, the implementation is simple and the hardware cost is reduced.
Alternate arrangement means that images captured with the multiple sets of shooting parameters, used in turn, are ordered by capture time, so that any two adjacent images use different shooting parameters.
Before the above method is performed, the image sensor is configured in a wide dynamic range (WDR) operating mode. In WDR mode, the sensor can capture several images with different shooting parameters within a single frame duration, which are normally synthesized into one wide-dynamic-range frame; this is what makes it possible to configure multiple sets of shooting parameters for the sensor.
Further, when acquiring the image sequence, the image sensor captures images with the multiple sets of shooting parameters in turn, marks images captured with different parameters with different identifiers, and assembles the captured images into the image sequence. The identifiers make it faster to determine which shooting parameters were used for each image, so that the images can be separated from the sequence into multiple video streams.
Further, when generating the multiple video streams from the image sequence, the image processor creates multiple image processing channels and routes each image to the corresponding channel according to its identifier. The identifier indicates the shooting parameters used when the image was captured; images captured with the same parameters are combined into one video stream, so the image sequence is divided into multiple video streams.
Optionally, the multiple video streams may include a long exposure time video stream for face recognition and a short exposure time video stream for license plate recognition.
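To make the separation concrete, the following minimal sketch demultiplexes a tagged image sequence into one stream per set of shooting parameters. It is illustrative only; the tag values and the (tag, image) data layout are assumptions, not taken from the patent.

```python
from collections import defaultdict

def demultiplex(image_sequence):
    """Split a tagged image sequence into one stream per shooting-parameter tag.

    image_sequence: iterable of (tag, image) pairs, where the tag identifies
    the set of shooting parameters used to capture the image.
    """
    streams = defaultdict(list)
    for tag, image in image_sequence:
        streams[tag].append(image)
    return dict(streams)

# Hypothetical alternating long-exposure ("0") / short-exposure ("1") sequence.
sequence = [("0", "face_0"), ("1", "plate_0"), ("0", "face_1"), ("1", "plate_1")]
streams = demultiplex(sequence)
# streams["0"] holds the long-exposure images, streams["1"] the short-exposure ones.
```

Because the identifier is attached at capture time, the split requires no image analysis: it is a pure routing step on the tag.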
In a second aspect, an embodiment of this application provides a multi-channel video stream generating apparatus that has the function of implementing the method described in the first aspect. The function can be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function.
In a third aspect, an embodiment of this application provides a multi-channel video stream generating device comprising an image sensor and an image processor. The image sensor is configured to acquire an image sequence in which images captured with multiple sets of shooting parameters are arranged alternately, the shooting parameters including at least one of exposure time and image gain; the image processor is configured to generate multiple video streams from the image sequence, where the images of each video stream correspond to one set of shooting parameters.
In a fourth aspect, an embodiment of this application provides a multi-channel video stream generating apparatus comprising a processor, a memory, a bus, and a communication interface. The memory stores computer-executable instructions and is connected to the processor through the bus; when the apparatus runs, the processor executes the instructions stored in the memory so that the apparatus performs the multi-channel video stream generation method of any design of the first aspect.
In a fifth aspect, the present application provides a computer-readable storage medium, which stores instructions that, when executed on a computer, enable the computer to perform the multi-channel video stream generation method of any one of the above first aspects.
In a sixth aspect, embodiments of the present application provide a computer program product containing instructions, which when run on a computer, enable the computer to perform the multi-channel video stream generation method of any one of the above first aspects.
For the technical effects of any design in the second to sixth aspects, refer to the technical effects of the corresponding designs in the first aspect; details are not repeated here.
The method provided by the embodiments of this application sets the image sensor to WDR mode, configures multiple sets of shooting parameters so that the sensor produces a sequence of images captured under those parameters, separates that sequence into per-parameter-set image sequences, and generates and processes one video stream per set. A single camera thus generates multiple video streams meeting different requirements; the implementation is simple and hardware cost is reduced.
These and other aspects of the present application will be more readily apparent from the following description of the embodiments.
Drawings
Fig. 1 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present application.
Fig. 2 is a schematic diagram of a multi-channel video stream generation method according to an embodiment of the present application.
Fig. 3A is a schematic diagram of a multi-path video stream generated from a sequence of images.
Fig. 3B is another schematic diagram of generating multiple video streams from a sequence of images.
Fig. 4 is a schematic structural diagram of a multi-channel video stream generating device according to an embodiment of the present application.
Detailed Description
First, some terms in the present application are explained so as to be easily understood by those skilled in the art.
1. A wide dynamic range image sensor is an image sensor that supports a wide dynamic range (WDR) operating mode. In WDR mode, the sensor captures several images per frame and sends them to the image processing chip to be synthesized into one wide-dynamic-range frame. Each image corresponds to different shooting parameters, which comprise exposure time and image gain and can be set by the user. The longer the exposure time, the more photoelectrons accumulate and the brighter the captured image; image gain is the amplification factor applied to the electrical signal during capture, and a larger gain likewise produces a brighter image.
2. The single frame time is the frame interval of the video stream output by the camera. When the output frame rate reaches 25 frames per second, playback appears smooth to the human eye; the sensor must then acquire 25 frames per second, i.e. output one frame every 40 milliseconds, so the single frame time is 40 ms. In some cases a user may need 50 images per second, giving a single frame time of 20 ms. The single frame time varies with the frame rate of the output video stream, and different values can be used for different output rates.
3. The exposure time is the time used for photoelectric conversion when the image sensor captures an image.
4. Plural means two or more.
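As a worked check of the single-frame-time arithmetic in term 2 above (a trivial sketch added for illustration, not part of the patent):

```python
def frame_interval_ms(fps):
    """Single frame time: the interval between output frames at a given frame rate."""
    return 1000.0 / fps

assert frame_interval_ms(25) == 40.0  # 25 fps -> one frame every 40 ms
assert frame_interval_ms(50) == 20.0  # 50 fps -> one frame every 20 ms
```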
To make the purpose, technical solutions, and technical effects of this application clearer, the technical solutions in the embodiments are described below with reference to the accompanying drawings. The methods of operation in the method embodiments may also be applied to the apparatus or system embodiments.
Fig. 1 is a schematic structural diagram of an image capturing apparatus 100 according to an embodiment of this application. For ease of understanding, features of the apparatus 100 not relevant to this application are not described. The apparatus 100 includes: a lens 110 as its front-end component, which may be of a fixed-aperture, auto-zoom, or similar type; an image sensor 120 for recording incident light, such as a complementary metal oxide semiconductor (CMOS) or charge-coupled device (CCD) sensor; an image processor 130; a processor 140 for performing computing operations and controlling the device; a memory 150 for storing programs and data; a communication bus 160 for transferring information between components; and a communication interface 170 for exchanging information over a communication network with other nodes connected to the network.
The image sensor 120 receives and records light and processes the information by means of an A/D converter and a signal processor 131, both well known to the skilled person. In some embodiments, for example when the image sensor 120 is a CMOS sensor, the sensor itself includes an A/D converter, so none is required in the image processor 130. The result produced by the A/D converter and signal processor 131 is digital image data, which according to one embodiment is processed in a scaling unit 132 and an image encoder 133 before being sent to the processor 140. The scaling unit 132 processes the digital image data into at least one image of a specific size, and may be arranged to generate several differently sized images all representing the same image/frame provided by the A/D converter and signal processor 131. According to another embodiment, the function of the scaling unit 132 is performed by the image encoder 133; in yet another embodiment, no scaling or resizing is performed on the image from the image sensor 120. The image sensor in the embodiments of this application may be a WDR sensor supporting a WDR operating mode, in which the sensor captures several images, using different shooting parameters, that are normally synthesized into one frame of image data. The user can configure multiple sets of shooting parameters for the sensor as needed, so that several images are captured under different parameters within a single frame time.
The image processor 130 separates the images captured by the image sensor under different shooting parameters into different video streams. The encoder 133, which is optional for carrying out this application, encodes the digital image data into any of a number of known formats for continuous or limited video sequences, still images, or image/video streams, for example MPEG-1, MPEG-2, MPEG-4, JPEG, Motion JPEG, or bitmap. The processor 140 may take unencoded images as input; in that case the image data is transferred from the signal processor 131 or the scaling unit 132 to the processor 140 without passing through the image encoder 133. The unencoded image may be in any unencoded format, such as BMP, PNG, PPM, PGM, PNM, or PBM. The processor 140 may also take encoded data as input.
In one embodiment of the present application, the image data may be directly transmitted from the signal processor 131 to the processor 140 without passing through the scaling unit 132 or the image encoder 133. In yet another embodiment, the image data may be sent from the scaling unit 132 to the processor 140 without passing through the image encoder 133.
The processor 140 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs according to this disclosure. It may also be implemented as a field-programmable gate array (FPGA) or a DSP. In actual products, some or all of the functions of the image processor 130 may be integrated into the processor 140.
The memory 150 stores the application program code for carrying out this application, and may be, without limitation, a read-only memory (ROM) or other static storage device, a random access memory (RAM) or other dynamic storage device, an electrically erasable programmable read-only memory (EEPROM), optical disk storage (including CD-ROM, laser disc, DVD, Blu-ray disc, etc.), a magnetic disk or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer. The memory 150 may be a separate component coupled to the processor 140 through the bus 160, or may be integrated with the processor 140.
Communication bus 160 may include a path that transfers information between components.
The camera 100 may further include a communication interface 170 for transmitting the acquired video streams to a back-end device for storage or processing. The interface 170 may use any transceiver-type device for communicating with other devices or networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN). The camera 100 may instead store the acquired video streams in the local memory 150.
The image capturing apparatus shown in Fig. 1 may be a video camera, a device with an image sensor and a processor such as a mobile phone, or any device with a structure similar to that of Fig. 1.
The multi-channel video stream generation method provided by the embodiments of this application is further described below with reference to the accompanying drawings.
As shown in Fig. 2, the time-division-multiplexing-based method for generating multiple video streams uses a WDR image sensor to capture, within a single frame duration, image data under different shooting parameters, and then separates that data by parameter set to obtain multiple video streams. The method can be applied to the image capturing apparatus shown in Fig. 1 to generate video streams for different user requirements; its specific steps are described below with reference to Fig. 1. The method comprises the following steps:
S210, configure the image sensor into the wide dynamic range operating mode, and configure multiple sets of shooting parameters for the image sensor.
The method exploits the characteristics of the sensor's WDR mode, so the sensor must be configured in the WDR operating mode and given multiple sets of shooting parameters, enabling it to capture several images per single frame time. The number of images captured within a single frame duration is determined mainly by the application scenario and the performance of the sensor; a person skilled in the art can configure appropriate shooting parameters accordingly.
The method time-division multiplexes the single-frame duration of the WDR operating mode, capturing several images with different shooting parameters within that duration. The background scenario is only one example of how the parameters may be set: there, the parameters differ mainly in exposure time so as to obtain clear face and license plate images. In other implementations, more than two sets of parameters may be configured, as far as hardware performance allows. Nor are the differences limited to exposure time: two sets may share an exposure time but differ in image gain, or differ in both.
It should be noted that a set of shooting parameters in the embodiments of this application generally includes an exposure time and an image gain; if either value is fixed across all captures, the set may include only the other.
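A set of shooting parameters as described here could be modeled as follows. This is a sketch under the stated assumptions; the class name, field names, and example values are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ShootingParams:
    """One set of shooting parameters; either field may be omitted if fixed."""
    exposure_ms: Optional[float] = None
    gain: Optional[float] = None

# Hypothetical configuration for the face/license-plate scenario:
long_exposure = ShootingParams(exposure_ms=10.0, gain=1.0)   # faces
short_exposure = ShootingParams(exposure_ms=4.0, gain=2.0)   # plates
param_sets = [long_exposure, short_exposure]
```

More sets could be appended to `param_sets` where hardware performance allows, matching the note above that the method is not limited to two.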
S220, the WDR image sensor captures images using the multiple sets of shooting parameters in turn, and generates an image sequence.
When the WDR image sensor operates in WDR mode, each output frame is normally synthesized from at least two images captured, within a single frame duration, with different shooting parameters that are independently configurable by the user. Although each frame in the sensor's normal WDR output is composed of at least two images, the synthesis can be disabled in software so that the captured images are output directly, yielding an image sequence in which images captured with different shooting parameters are arranged alternately.
Further, the shooting parameters include exposure duration. As described in the background, when a camera deployed on a road must simultaneously acquire clear face and license plate images, it must shoot with different exposure durations. Two sets of shooting parameters with different exposure times can then be configured: for example, the WDR sensor may alternate between 10 ms and 4 ms exposures, generating an image sequence that interleaves images captured at the two exposure times.
Further, the sensor may attach different data marks to images captured with different shooting parameters, for example "0" for a long-exposure image and "1" for a short-exposure image. The marking scheme is not fixed; any convention that distinguishes the parameter sets will do.
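The alternating, tagged capture described in S220 can be simulated as in the sketch below. The tag values and the string stand-ins for image data are illustrative assumptions.

```python
from itertools import cycle

def capture_sequence(num_images, tags=("0", "1")):
    """Simulate a WDR sensor cycling through shooting-parameter tags,
    emitting each captured image as a (tag, image) pair."""
    tag_iter = cycle(tags)
    return [(next(tag_iter), f"image_{i}") for i in range(num_images)]

seq = capture_sequence(4)
# Adjacent images carry different tags, so the sequence alternates: 0, 1, 0, 1.
```

Passing a longer `tags` tuple models configurations with more than two parameter sets.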
S230, the image processor separates the image sequence according to the shooting parameters used for each image, generating multiple video streams.
In WDR mode, the sensor delivers to the image processor an image sequence composed of images captured with different shooting parameters. Ordinarily the image processor would then perform wide dynamic synthesis, combining several images into one wide-dynamic frame. In this scheme, however, it skips the synthesis and instead classifies the images by the shooting parameters used, forming one video stream per parameter set and thereby generating at least two video streams.
Further, the image processor may read the image sequence using the sensor's own data transmission protocol, such as LVDS or MIPI.
Further, when separating the image sequence into video streams, the image capturing apparatus creates multiple image signal processing (ISP) channels and routes images to different channels according to the shooting parameters used to capture them. Each ISP channel processes the images captured under one parameter set. The multiple channels may be realized by multiple processing chips in hardware, or by a single chip in software through time-division or memory-partition multiplexing.
Optionally, after routing images captured with different shooting parameters to different ISP channels, the image processor may preprocess the images in each channel with channel-specific image processing parameters; the preprocessing includes sharpness-improving optimizations such as white balance, image sharpening, noise reduction, or image enhancement.
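Per-channel preprocessing could be dispatched as in this sketch. The step names mirror those listed above, but the channel names, the pipeline mapping, and the string stand-ins for real ISP operations are assumptions for illustration.

```python
# Hypothetical per-channel preprocessing pipelines.
CHANNEL_PIPELINES = {
    "long_exposure":  ["white_balance", "noise_reduction"],
    "short_exposure": ["white_balance", "image_sharpening"],
}

def preprocess(channel, image):
    """Apply the channel's preprocessing steps in order; each step string
    stands in for the corresponding real ISP operation."""
    for step in CHANNEL_PIPELINES[channel]:
        image = f"{step}({image})"
    return image

result = preprocess("short_exposure", "img")
# result == "image_sharpening(white_balance(img))"
```

Keeping the pipeline per channel is what lets each stream be tuned for its purpose, e.g. stronger sharpening for the short-exposure plate stream.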
Depending on the shooting parameters, the image processor may generate more than two video streams. For example, three image sequences may be output: a long-exposure-time sequence, a medium-exposure-time sequence, and a short-exposure-time sequence, with a different image gain set for the shooting parameters of each exposure time.
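Such a three-stream configuration might look as follows. The exposure and gain values are purely illustrative assumptions, not values taken from the application:

```python
# three hypothetical parameter sets: different exposure times, each with its own gain
param_sets = [
    {"id": "long",  "exposure_us": 8000, "gain_db": 0.0},
    {"id": "mid",   "exposure_us": 2000, "gain_db": 6.0},
    {"id": "short", "exposure_us": 500,  "gain_db": 12.0},
]

# the sensor cycles through the sets, so the output sequence interleaves
# the three exposures; each id later maps to one video stream
schedule = [param_sets[i % len(param_sets)]["id"] for i in range(6)]
print(schedule)  # ['long', 'mid', 'short', 'long', 'mid', 'short']
```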
In step S220, the WDR image sensor generates an image sequence in which images acquired with different shooting parameters are alternately arranged. In step S230, the image processor separates the sequence according to the shooting parameters of each image and assembles the images acquired with the same set of shooting parameters into one video stream. The image processor thus separates the image sequence generated by the WDR image sensor into at least two video streams.
S240: perform intelligent analysis processing on each of the multiple video streams.
After the multiple video streams are generated, each stream is subjected to intelligent analysis processing as needed. Typically this is face or license plate recognition, but other operations may also be performed, such as feature extraction, vehicle violation detection, or extraction of structured information from an image.
As shown in fig. 3A, taking the scene in the background art as an example, different shooting parameters are required to obtain a clear face image and a clear license plate image at the same time. Each single-frame duration is therefore divided into a long exposure for capturing a clear face image and a short exposure for capturing a clear license plate image, so that both are captured within the single-frame duration and arranged alternately to form an image sequence. After the sequence is obtained, the long-exposure images are separated out to form a face-capture video stream and the short-exposure images to form a license-plate-capture video stream; that is, the sequence of alternating long- and short-exposure images is separated into two video streams for subsequent intelligent video analysis.
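For a strictly alternating long/short sequence, the split reduces to taking even and odd positions. A minimal sketch, assuming (as the example in fig. 3A does) that long-exposure frames occupy the even positions:

```python
def split_alternating(sequence):
    """Separate an alternating long/short-exposure sequence into two streams."""
    face_stream = sequence[0::2]   # long-exposure frames -> face capture stream
    plate_stream = sequence[1::2]  # short-exposure frames -> license plate stream
    return face_stream, plate_stream

seq = ["L0", "S0", "L1", "S1", "L2", "S2"]
faces, plates = split_alternating(seq)
print(faces)   # ['L0', 'L1', 'L2']
print(plates)  # ['S0', 'S1', 'S2']
```

The same slicing applies unchanged to the fig. 3B case, where the two interleaved streams differ in image gain rather than exposure time.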
As shown in fig. 3B, two sets of shooting parameters with the same exposure time but different image gains may instead be used to acquire images within each single-frame duration, with the acquired images alternately arranged to form an image sequence. After the sequence is obtained, the high-gain images and the low-gain images are separated to form their respective video streams; that is, the sequence of alternating high-gain and low-gain images is separated into two video streams for subsequent storage and processing.
In the multi-channel video stream generation method provided by this embodiment of the application, multiple sets of shooting parameters are configured for the WDR image sensor so that it generates an image sequence composed of images shot with those parameter sets; the image processor then separates the sequence generated by the WDR image sensor into multiple image sequences according to the shooting parameters and assembles each into a video stream. The method thus uses a single camera to generate multiple video streams that meet different requirements; it is simple to implement, reduces hardware cost, and avoids wasting hardware resources.
The method embodiments disclosed in this application can be implemented by software, hardware, or a combination of hardware and computer software. It will be appreciated that, to implement the above method, the apparatus generating the multiple video streams may comprise corresponding hardware structures and/or software modules for performing the various steps. Those of skill in the art will readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or as combinations of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present application.
In the embodiments of the present application, the functional modules of the multi-channel video stream generating device may be divided according to the method example: each functional module may correspond to one function, or two or more functions may be integrated into one processing module, and an integrated module may be implemented in hardware or as a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and represents only one logical division of functions; other divisions are possible in actual implementation.
For example, where each functional module is divided by its function, fig. 4 shows a schematic diagram of a possible structure of the multi-channel video stream generating apparatus 400. The apparatus includes an image acquisition module 410 and an image separation module 420. The image acquisition module 410 is configured to acquire an image sequence composed of images acquired with a plurality of sets of shooting parameters, the shooting parameters including at least one of exposure time and image gain. The image separation module 420 is configured to generate multiple video streams from the image sequence acquired by the image acquisition module, where the images of each of the multiple video streams correspond to one set of shooting parameters.
Optionally, the apparatus 400 further comprises a parameter control module 430 configured to set the image acquisition module 410 in the WDR operating mode and to configure the multiple sets of shooting parameters for the image acquisition module 410 before the image sequence is acquired.
As one implementation, the image acquisition module 410 is configured to sequentially acquire images with the multiple sets of shooting parameters, label each acquired image with an identifier corresponding to the shooting parameters used, and assemble the acquired images into an image sequence.
In one implementation, the image separation module 420 is configured to create multiple ISP channels, and add images to corresponding ISP channels according to identifiers of the images in the image sequence to generate multiple video streams.
Optionally, the multiple video streams include a long exposure time video stream for face recognition and a short exposure time video stream for license plate recognition.
Optionally, the apparatus 400 further includes an ISP processing module 440 configured to pre-process the images. The pre-processing includes optimization steps that improve image clarity, such as white balance, image sharpening, and noise reduction. It should be noted that the ISP processing module 440 can process the images in different video streams differently.
Optionally, the apparatus 400 further includes an intelligent analysis module 450 configured to perform intelligent analysis processing on the multiple video streams. The module can apply various analyses to a video stream, typically face or license plate recognition, as well as other operations such as feature extraction, vehicle violation detection, or extraction of structured information from an image. It should be noted that the intelligent analysis module 450 may apply different analyses to different video streams, and may also analyze video streams that have not been pre-processed by the ISP processing module 440.
All relevant contents of each step related to the above method embodiment may be referred to the functional description of the corresponding functional module, and are not described herein again.
In the present embodiment, the multi-channel video stream generating apparatus is presented with its functional modules divided by function, or with its functional modules divided in an integrated manner. As used herein, a "module" may refer to an application-specific integrated circuit (ASIC), a processor and memory that execute one or more software or firmware programs, an integrated logic circuit, and/or other components that provide the described functionality. In a simple embodiment, those skilled in the art will recognize that the multi-channel video stream generating apparatus 400 may take the form shown in fig. 1. For example, the image acquisition module 410 in fig. 4 may be implemented by the lens 110 and the image sensor 120 in fig. 1; the image separation module 420, the parameter control module 430, and the ISP processing module 440 may be implemented by the image processor 130 calling application program code stored in the memory 150; and the intelligent analysis module 450 may be implemented by the processor 140 calling application program code stored in the memory 150. This embodiment is not limited thereto.
Since the multi-channel video stream generating device provided in the embodiment of the present application can be used to execute the method for generating the multi-channel video stream, the technical effects obtained by the multi-channel video stream generating device can refer to the method embodiment, and are not described herein again.
Embodiments of the present application further provide a computer-readable storage medium for storing computer software instructions for the multi-channel video stream generating apparatus and/or the image capturing apparatus, which includes program codes designed to execute the above method embodiments. By executing the stored program codes, the WDR image sensor can generate a sequence comprising images acquired by adopting different shooting parameters, and further acquire a plurality of paths of video streams.
The embodiment of the application also provides a computer program product. The computer program product comprises computer software instructions which can be loaded by a processor for implementing the method in the above-described method embodiments.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, apparatus (system), or computer program product. Accordingly, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "module" or "system". Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. A computer program may be stored/distributed on a suitable medium, supplied together with or as part of other hardware, and may also take other distribution forms, such as via the Internet or other wired or wireless telecommunication systems.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present application has been described in conjunction with specific features and embodiments thereof, it will be evident that various modifications and combinations can be made thereto without departing from the spirit and scope of the application. Accordingly, the specification and figures are merely exemplary of the present application as defined in the appended claims and are intended to cover any and all modifications, variations, combinations, or equivalents within the scope of the present application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
Claims (15)
1. A method for generating multiple video streams, the method being used in a camera device comprising an image sensor and an image processor, the method comprising:
the image sensor acquires an image sequence, wherein the image sequence is formed by alternately arranging images acquired by utilizing a plurality of groups of shooting parameters, and the shooting parameters comprise at least one of exposure time and image gain;
and the image processor generates a plurality of video streams according to the image sequence, wherein the image of each video stream in the plurality of video streams corresponds to a group of shooting parameters.
2. The method of claim 1, wherein, before the image sensor acquires the image sequence, the image sensor is configured in a Wide Dynamic Range (WDR) operating mode and the camera device acquires the plurality of sets of shooting parameters.
3. The method of claim 1, wherein the image sensor acquiring the sequence of images comprises the image sensor alternately acquiring images using the plurality of sets of shooting parameters, labeling the images acquired using different shooting parameters with different identifiers, and assembling the acquired images into the sequence of images.
4. The method of claim 3, wherein the image processor generating multiple video streams from the sequence of images comprises:
the image processor creates a plurality of image processing channels;
and the image processor adds the images into the corresponding image processing channels according to the identifiers of the images in the image sequence to generate a plurality of paths of video streams.
5. The method of any one of claims 1 to 4, wherein the multiple video streams comprise a long-exposure-time video stream for face recognition and a short-exposure-time video stream for license plate recognition.
6. An apparatus for generating multiple video streams, the apparatus comprising:
an image acquisition module, configured to acquire an image sequence in which images acquired with a plurality of sets of shooting parameters are alternately arranged, the shooting parameters including at least one of exposure time and image gain;
and the image separation module is used for generating a plurality of paths of video streams according to the image sequence, wherein the image of each path of video stream in the plurality of paths of video streams corresponds to a group of shooting parameters.
7. The apparatus of claim 6, further comprising a parameter control module to configure the image acquisition module in a WDR mode of operation and to configure the plurality of sets of shooting parameters for the image acquisition module.
8. The apparatus of claim 6, wherein the image acquisition module is further configured to alternately acquire images using the plurality of sets of shooting parameters, label the images acquired using different shooting parameters with different identifiers, and group the acquired images into an image sequence.
9. The apparatus of claim 8, wherein the image separation module is further configured to create multiple image processing channels, and add images to corresponding image processing channels according to identifiers of images in the image sequence to generate multiple video streams.
10. The apparatus according to any one of claims 6 to 9, wherein the multiple video streams comprise a long-exposure-time video stream for face recognition and a short-exposure-time video stream for license plate recognition.
11. An apparatus for generating multiple video streams, the apparatus comprising:
an image sensor for acquiring an image sequence consisting of an alternating arrangement of images acquired with a plurality of sets of shooting parameters, the shooting parameters including at least one of exposure time and image gain;
and the image processor is used for generating a plurality of video streams according to the image sequence, wherein the image of each video stream in the plurality of video streams corresponds to a group of shooting parameters.
12. The apparatus of claim 11, wherein the image processor is further configured to set the image sensor in a WDR operating mode and to configure the plurality of sets of shooting parameters for the image sensor.
13. The apparatus of claim 11, wherein the image sensor is further configured to alternately acquire images using the plurality of sets of capture parameters, label the images acquired using different capture parameters with different identifiers, and assemble the acquired images into a sequence of images.
14. The apparatus of claim 13, wherein the image processor is further configured to create multiple image processing channels, and to add images to corresponding image processing channels based on the identifiers of the images in the sequence of images to generate multiple video streams.
15. The apparatus according to any one of claims 11 to 14, wherein the multiple video streams comprise a long-exposure-time video stream for face recognition and a short-exposure-time video stream for license plate recognition.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811404859.3A CN111225126A (en) | 2018-11-23 | 2018-11-23 | Multi-channel video stream generation method and device |
PCT/CN2019/119166 WO2020103786A1 (en) | 2018-11-23 | 2019-11-18 | Method for generating multiple video streams and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811404859.3A CN111225126A (en) | 2018-11-23 | 2018-11-23 | Multi-channel video stream generation method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111225126A true CN111225126A (en) | 2020-06-02 |
Family
ID=70773230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811404859.3A Pending CN111225126A (en) | 2018-11-23 | 2018-11-23 | Multi-channel video stream generation method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111225126A (en) |
WO (1) | WO2020103786A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113834977B (en) * | 2020-06-23 | 2024-08-16 | 广州汽车集团股份有限公司 | Multi-path LVDS signal test verification system and method |
CN115334235B (en) * | 2022-07-01 | 2024-06-04 | 西安诺瓦星云科技股份有限公司 | Video processing method, device, terminal equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2630786A1 (en) * | 2010-10-24 | 2013-08-28 | Opera Imaging B.V. | System and method for imaging using multi aperture camera |
CN103975578A (en) * | 2011-12-08 | 2014-08-06 | 索尼公司 | Image processing device, image processing method, and program |
CN104243834A (en) * | 2013-06-08 | 2014-12-24 | 杭州海康威视数字技术股份有限公司 | Image streaming dividing control method and device of high-definition camera |
CN104780324A (en) * | 2015-04-22 | 2015-07-15 | 努比亚技术有限公司 | Shooting method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8115858B2 (en) * | 2008-01-16 | 2012-02-14 | Samsung Electronics Co., Ltd. | System and method for acquiring moving images |
CN101764959A (en) * | 2008-12-25 | 2010-06-30 | 昆山锐芯微电子有限公司 | Image pickup system and image processing method |
JP5655667B2 (en) * | 2011-03-31 | 2015-01-21 | カシオ計算機株式会社 | Imaging apparatus, imaging control method, image processing apparatus, image processing method, and program |
CN105141833B (en) * | 2015-07-20 | 2018-12-07 | 努比亚技术有限公司 | Terminal image pickup method and device |
CN108353130A (en) * | 2015-11-24 | 2018-07-31 | 索尼公司 | Image processor, image processing method and program |
CN106254782A (en) * | 2016-09-28 | 2016-12-21 | 北京旷视科技有限公司 | Image processing method and device and camera |
- 2018-11-23: CN application CN201811404859.3A (published as CN111225126A), active, Pending
- 2019-11-18: WO application PCT/CN2019/119166 (published as WO2020103786A1), active, Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111726530A (en) * | 2020-06-28 | 2020-09-29 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for acquiring multiple paths of video streams |
CN115297257A (en) * | 2020-06-28 | 2022-11-04 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for acquiring multiple paths of video streams |
CN115297257B (en) * | 2020-06-28 | 2024-02-02 | 杭州海康威视数字技术股份有限公司 | Method, device and equipment for acquiring multiple paths of video streams |
CN112364732A (en) * | 2020-10-29 | 2021-02-12 | 浙江大华技术股份有限公司 | Image processing method and apparatus, storage medium, and electronic apparatus |
CN114710626A (en) * | 2022-03-07 | 2022-07-05 | 北京千方科技股份有限公司 | Image acquisition method, image acquisition device, electronic equipment and medium |
CN114710626B (en) * | 2022-03-07 | 2024-05-14 | 北京千方科技股份有限公司 | Image acquisition method, device, electronic equipment and medium |
Also Published As
Publication number | Publication date |
---|---|
WO2020103786A1 (en) | 2020-05-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111225126A (en) | Multi-channel video stream generation method and device | |
US8416303B2 (en) | Imaging apparatus and imaging method | |
US10958820B2 (en) | Intelligent interface for interchangeable sensors | |
US8605185B2 (en) | Capture of video with motion-speed determination and variable capture rate | |
US8305448B2 (en) | Selective privacy protection for imaged matter | |
CN103260037B (en) | For sending the equipment of two field picture and the method for camera | |
CN111836102B (en) | Video frame analysis method and device | |
CN104243834B (en) | The image flow-dividing control method and its device of high definition camera | |
CN103260026A (en) | Apparatus and method for shooting moving picture in camera device | |
KR101730342B1 (en) | Image processing apparatus, image processing method, and storage medium | |
KR101747214B1 (en) | Muliti-channel image analyzing method and system | |
CN111225153A (en) | Image data processing method, image data processing device and mobile terminal | |
CN109669783B (en) | Data processing method and device | |
CN114125400A (en) | Multi-channel video analysis method and device | |
JP2010212781A (en) | Image processing apparatus and image processing program | |
KR101932539B1 (en) | Method for recording moving-image data, and photographing apparatus adopting the method | |
KR101603213B1 (en) | Method for correcting handshaking and digital photographing apparatus adopting the method | |
WO2022078036A1 (en) | Camera and control method therefor | |
CN110475044B (en) | Image transmission method and device, electronic equipment and computer readable storage medium | |
CN115668274A (en) | Computer software module arrangement, circuit arrangement, arrangement and method for improved image processing | |
JP6590681B2 (en) | Image processing apparatus, image processing method, and program | |
EP2495972A1 (en) | Monitoring device and method for monitoring a location | |
KR102550117B1 (en) | Method and System for Video Encoding Based on Object Detection Tracking | |
CN110049239B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
JP2005303535A (en) | Image data communicating method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20200602