CN116347020A - Video camera image transmission method and device and video camera - Google Patents

Video camera image transmission method and device and video camera Download PDF

Info

Publication number
CN116347020A
CN116347020A (application CN202111645121.8A)
Authority
CN
China
Prior art keywords
partial
image
full
images
graphs
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111645121.8A
Other languages
Chinese (zh)
Inventor
匡小冬
刘琳
白涛
许俊峰
闫冬升
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority: PCT/CN2022/139015 (published as WO2023109867A1)
Publication of CN116347020A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N5/00: Details of television systems
    • H04N5/76: Television signal recording
    • H04N5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77: Interface circuits between a recording apparatus and a television camera
    • H04N5/91: Television signal processing therefor
    • H04N5/917: Television signal processing therefor for bandwidth reduction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

Provided is a camera image transmission method including: extracting a plurality of full images from a video stream shot by a camera, the full images recording a plurality of targets; extracting a plurality of partial images from the full images, wherein at least one full image yields more than one partial image; and sending the partial images, and the full images corresponding to the sent partial images, to a storage device, wherein at least two of the partial images correspond to the same full image.

Description

Video camera image transmission method and device and video camera
Technical Field
The present invention relates to camera technology, and in particular, to a camera image transmission method.
Background
A camera converts optical signals into visible images and videos through photoelectric conversion; these can be recorded on a storage medium for later review. Cameras are therefore widely applied in fields that require video surveillance, such as road intersections and airports.
A network camera (IP camera, IPC) adds a network coding module and a network port to a traditional camera. The network coding module compresses and encodes the acquired data, which is then transmitted over the network through the network port; a remote storage server stores the encoded data after receiving it. The data can then be viewed or processed on the storage server.
Taking a snapshot network camera installed at an intersection as an example, it frequently takes snapshots of vehicles passing through its shooting range. The captured image is called a full image, and a partial image (e.g., a partial image of a vehicle) is cropped from it. If multiple images in the video stream shot by the same camera all record the same vehicle, the partial image in which the vehicle is most recognizable, together with its corresponding full image, is selected for transmission, and the remaining images are not transmitted.
For a busy intersection, after shooting for a long time, a network camera generates a large number of partial images and full images that need to be transmitted. On an urban road network, a large number of network cameras work in parallel, generating a huge number of images. Transmitting all these images to the storage server clearly places a significant burden on both the network and the storage capacity of the storage server.
Therefore, how to reduce the number of images to be transmitted is an urgent problem to be solved.
Disclosure of Invention
In a first aspect, there is provided a camera image transmission method, including: extracting a plurality of full images recording a plurality of targets from a video stream, wherein each full image records at least one target; obtaining a first set of partial images from the plurality of full images, wherein the first set of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first set are obtained from the same full image among the plurality of full images; and sending the first set of partial images, and the full images corresponding to the partial images in the first set, to a storage device, wherein the number of full images sent is smaller than the number of partial images.
With the solution provided in the first aspect, several partial images jointly correspond to the same full image, so the number of full images sent by the camera to the storage device is less than the number of partial images. This reduces the number of full images transmitted, and thus the occupation of network bandwidth and of storage space on the storage device.
In a first possible implementation manner of the first aspect, the first set of partial images includes a first partial image recording a first target, the first partial image is obtained from a first full image, and the highest-quality image of the first target is located in a second full image different from the first full image. In other words, the partial image chosen for transmission is not necessarily the one in which the target has the best image quality. Optionally, the image quality of the first partial image still meets a preset quality requirement.
In a second aspect, there is provided a camera image transmission apparatus including: a full-image acquisition module for extracting, from a video stream, a plurality of full images recording a plurality of targets, wherein each full image records at least one target; a partial-image acquisition module for obtaining a first set of partial images from the plurality of full images, wherein different partial images record different targets, and at least two partial images in the first set are obtained from the same full image among the plurality of full images; and a sending module for sending the first set of partial images, and the full images corresponding to the partial images in the first set, to a storage device, wherein the number of full images sent is smaller than the number of partial images.
In a third aspect, there is provided a camera comprising: a lens for receiving light; a sensor for photoelectric conversion, generating an original image; and a processor system for processing the original image and for: extracting a plurality of full images recording a plurality of targets from a video stream, wherein each full image records at least one target; obtaining a first set of partial images from the plurality of full images, wherein the first set of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first set are obtained from the same full image among the plurality of full images; and sending the first set of partial images, and the full images corresponding to the partial images in the first set, to a storage device, wherein the number of full images sent is smaller than the number of partial images.
In a fourth aspect, a program product is provided that is executable in a processor of a camera, by which the camera can perform: extracting a plurality of full images recording a plurality of targets from a video stream, wherein each full image records at least one target; obtaining a first set of partial images from the plurality of full images, wherein the first set of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first set are obtained from the same full image among the plurality of full images; and sending the first set of partial images, and the full images corresponding to the partial images in the first set, to a storage device, wherein the number of full images sent is smaller than the number of partial images.
Drawings
Fig. 1 is a block diagram of an embodiment of the components of a camera.
Fig. 2 is a flow chart of the camera sending an image.
Fig. 3 is a schematic diagram of an embodiment in which a camera transmits an image.
Fig. 4 is a flow chart of the camera sending an image.
Fig. 5 is a schematic diagram of an embodiment in which a camera transmits an image.
Fig. 6 is a schematic diagram of an embodiment of a camera image transmission apparatus.
Detailed Description
A video camera is an electronic device that may be used to take pictures and record video. Referring to fig. 1, the camera of the embodiment of the present invention includes a lens 11, a sensor 12, and a system on chip (SoC) 13. The lens 11 is used for passing light; the sensor 12 is used for converting optical signals from the lens into electric signals and recording them in the form of original images; the system on chip 13 may encode the original image, for example encoding a RAW image into a JPEG image, an HEIF image, H.264 video, or H.265 video that can be viewed directly. An image in the embodiments of the invention can be a photo or a frame in a video stream.
The lens 11 presents an optical image of the observed object on the sensor of the camera. A lens combines optical parts of different shapes (mirrors, transmissive elements, and prisms) and different media (plastic, glass, or crystal) so that, after light is transmitted or reflected by these parts, its direction of propagation is changed as required and it is received by the sensor, completing the optical imaging of the object. Generally, a lens is formed by combining several groups of lens elements with different surface curvatures at different spacings. The choice of spacing, element curvature, light transmittance, and so on determines the focal length of the lens. The main parameter indexes of a lens include: effective focal length, aperture, field of view, distortion, relative illuminance, etc.; each index value affects the overall performance of the lens.
The sensor 12 is a device for converting an optical image into an electronic signal, and is widely used in cameras and other electronic optical devices. Common sensors include the charge-coupled device (CCD) and the complementary metal oxide semiconductor (CMOS) sensor. Both CCDs and CMOS sensors have a large number (e.g., tens of millions) of photodiodes, each called a photosensitive cell and corresponding to one pixel. When exposed, each photodiode converts the received light into an electric signal containing brightness (or brightness and color) information, from which the image is reconstructed. The Bayer array is a common image sensor technology: Bayer color filters make different pixels sensitive to one of the three primary colors (red, green, and blue), interlaced together, and a demosaicing interpolation algorithm is then used to obtain the original image. The Bayer array may be applied to a CCD or CMOS sensor; a sensor using a Bayer array is also called a Bayer sensor.
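As a rough illustration of the Bayer-array principle described above (a sketch, not production demosaicing: a real ISP interpolates a full-resolution value for every pixel), the following Python snippet collapses each 2x2 RGGB tile of a mosaic into one RGB pixel, averaging the two green samples:

```python
def demosaic_rggb(mosaic):
    """Collapse an RGGB Bayer mosaic (a list of rows of intensity values)
    into a half-resolution RGB image. In each 2x2 tile the samples are
    R G / G B; the two green samples are averaged."""
    rgb = []
    for y in range(0, len(mosaic) - 1, 2):
        row = []
        for x in range(0, len(mosaic[y]) - 1, 2):
            r = mosaic[y][x]
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2
            b = mosaic[y + 1][x + 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

# A 2x2 mosaic yields a single RGB pixel:
# demosaic_rggb([[10, 20], [30, 40]]) -> [[(10, 25.0, 40)]]
```

The halved resolution is the price of this "binning" shortcut; demosaicing algorithms such as bilinear interpolation avoid it at the cost of more computation.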
In some cases, the same camera may be provided with a plurality of sensors, such as a dual-lens dual-sensor camera, a single-lens dual-sensor camera, or a single-lens three-sensor camera, used to image the same subject. When the number of lenses is smaller than the number of sensors, a beam splitter can be arranged between the lens and the sensors to split the light entering from one lens onto the plurality of sensors, so that each sensor receives light.
A processor system is, in particular, a combination of a plurality of separate chips, or an integration of a plurality of chips (e.g., the system on chip 13). An image signal processor (ISP) 131 performs image signal processing, adjusting image parameters such as contrast, saturation, and gain. The processor 132 may fuse images, for example fusing a frame of gray-scale image with a frame of color image to generate a frame of high-quality color image, improving the image effect. The encoder 133 encodes the image into a playable format, for example generating a frame of H.264 or H.265 from YUV data; the image can then be played on a display or stored, and a sequence of images can be played continuously to form a video. In some embodiments, these chips are not integrated together and are independent of each other.
In addition, the camera may have other components, such as filters, network interfaces, encoders, etc., which are not described in detail herein.
After capturing images, the camera needs to send the full images, and the partial images of the regions where the targets are located, to the storage server. One approach is described below in conjunction with the flowchart of fig. 2 and the schematic diagram of fig. 3.
In step S11, the video camera captures a video stream, and the video camera extracts a plurality of frames from the captured video stream.
The camera sequentially shoots frames including: image 21 (full image), image 22 (full image), image 23 (full image), and image 24 (full image). The targets recorded in image 21 are vehicle A, vehicle B, and vehicle C; the targets recorded in image 22 are vehicle B and vehicle C; image 23 records vehicle C; and image 24 records no vehicle. Among these, vehicle B has the highest quality in image 22, and vehicle C has the highest quality in image 23.
In step S12, the camera crops partial images of the vehicles from the full images according to the image quality of each target, obtaining image 211 (partial image of vehicle A), image 221 (partial image of vehicle B), and image 231 (partial image of vehicle C).
In step S13, the camera sends all the partial images, and the full images from which they were cropped, to the storage server. In fig. 3 the number of partial images is 3, and the number of corresponding full images is also 3, so a total of 6 images need to be sent to the storage server: image 211, image 221, image 231, image 21, image 22, and image 23. After receiving the images, the storage server can cache or durably store them. Optionally, the storage server also provides functions such as image optimization and image recognition.
In this scheme, a full image is transmitted for every partial image; when several partial images are derived from the same full image, that full image is transmitted repeatedly.
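The repeated transmission can be made concrete with a small counting sketch (the function and argument names are ours, for illustration only): without deduplication, one full image is sent per partial image; with deduplication, each distinct full image is sent only once.

```python
def images_sent(partial_to_full, dedup_full):
    """Count the images a camera transmits. `partial_to_full` maps each
    partial image to the full image it was cropped from. With
    dedup_full=False, one full image accompanies every partial image
    (the scheme of steps S11-S13); with dedup_full=True, each distinct
    full image is sent once (the scheme of the later embodiment)."""
    partials = len(partial_to_full)
    fulls = len(set(partial_to_full.values())) if dedup_full else partials
    return partials + fulls

# Three partial images, two of which share full image 21:
# images_sent({"211": "21", "212": "21", "231": "23"}, False) -> 6
# images_sent({"211": "21", "212": "21", "231": "23"}, True)  -> 5
```

The second count matches the later worked example, where the camera sends 3 partial images and only 2 full images.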
In the above example, a single camera already needs to send 6 images (3 full images + 3 partial images) to the storage server for a short video stream. When many cameras shoot for a long time, the total number of images to be transmitted becomes very large. The present invention therefore further provides an embodiment in which, when multiple vehicles appear in the same full image, the camera can transmit only the partial images of those vehicles together with a single copy of the full image, instead of transmitting one full image per partial image. This amounts to multiple partial images "sharing" the same full image. The number of full images is then no longer in a 1:1 relation with the number of partial images but is smaller, reducing the number of full images sent.
The following flow chart describes this embodiment in detail with reference to fig. 4 and 5.
In step S21, the camera captures a video stream in real time and stores it in a local memory (RAM, memory card, SSD, or hard disk). The camera extracts images (also called frames) from the captured video stream; the complete image extracted in this step is called a full image, to distinguish it from a partial image. The extracted full images, in shooting order from earliest to latest, are image 21, image 22, image 23, and image 24.
In step S22, the region where a target is located is cropped from each of the plurality of full images (large images) to generate partial images (small images), which are put into cache queues. A target is an object to be observed or analyzed in a frame, for example a moving object such as a car, a pedestrian, or an animal.
The camera extracts one image out of every two in the video, obtains the plurality of images, detects the image quality of each target in the extracted images (strictly, of the region where the target is located; for convenience of description, simply referred to as the target), and performs the following operations on the same target appearing in different images:
(1) The target's highest-quality occurrence is cropped to obtain the best-quality partial image, which is stored in the best cache queue. When a target is shot for the first time, its partial image is cropped directly from that frame and stored in the best cache queue. For a target that has been shot before, if its quality in a later frame is higher than in an earlier frame, the partial image is cropped from the later frame and covers the partial image already in the best queue.
Each target may have its own best cache queue for storing its best partial image. The full image from which the best partial image was cropped (the best full image) is also placed in the best cache queue, or in a separately provided full-image queue.
The frame number of the best full image (or another marker that can distinguish different full images) is placed in a frame number queue.
(2) If the target's quality in a full image is acceptable but not the highest for that target, its partial image is cropped and stored in a sub-optimal queue. Thus, if a partial image cropped later has lower quality than an earlier one but its quality is still qualified, the later partial image is put into the sub-optimal queue.
The quality of a target can be judged along multiple dimensions, such as one of the following: the pose of the target in the image, the image resolution of the region where the target is located, the display rate of the target's key points, or the recognition rate when the target is recognized. When the target's quality does not reach the expected quality, it is not placed in the sub-optimal cache queue.
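The caching rules of steps (1) and (2) above can be sketched as follows (a hedged illustration: the class and attribute names are ours, and the demotion of a covered best partial image into the sub-optimal queue is inferred from the worked example that follows):

```python
class TargetCache:
    """Best / sub-optimal cache queues per target (illustrative names).

    Rule (1): the first sighting of a target always enters its best
    queue; a later, higher-quality crop covers it.
    Rule (2): a qualified crop (score >= threshold) that is not the
    best goes to the sub-optimal queue."""

    def __init__(self, quality_threshold=50):
        self.threshold = quality_threshold
        self.best = {}        # target -> (frame_id, score)
        self.suboptimal = {}  # target -> [(frame_id, score), ...]

    def update(self, target, frame_id, score):
        if target not in self.best:
            # First sighting: stored regardless of quality.
            self.best[target] = (frame_id, score)
            return
        prev_frame, prev_score = self.best[target]
        if score > prev_score:
            self.best[target] = (frame_id, score)
            if prev_score >= self.threshold:
                # Demote the covered crop if it is still qualified
                # (inferred: image 212 is later read from this queue).
                self.suboptimal.setdefault(target, []).append(
                    (prev_frame, prev_score))
        elif score >= self.threshold:
            self.suboptimal.setdefault(target, []).append((frame_id, score))
        # Non-first crops below the threshold are discarded.
```

For example, `update("B", 21, 60)` followed by `update("B", 22, 80)` leaves frame 22 in B's best queue and frame 21 in B's sub-optimal queue.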
In the order of the images in the video stream, the images extracted from the video stream are image 21, image 22, image 23, and image 24. As can be seen from these 4 images, vehicle A, vehicle B, and vehicle C drive out of the camera's shooting range one after another over time.
The image quality of each vehicle differs between images; the quality score is given in brackets (the quality threshold is 50 points; a target scoring below 50 points does not reach the expected quality). The distribution is shown in table 1 below, listing the partial image of each target and, in brackets, the target's score.
Image 21: vehicle A, image 211; vehicle B, image 212 (score above 50); vehicle C, image 213 (score below 50)
Image 22: vehicle B, image 221 (higher score than image 212); vehicle C, image 222 (40)
Image 23: vehicle C, image 231 (score above 50)
Image 24: no targets recorded
(Reconstructed from the description; the original table is an image and states the exact scores.)
TABLE 1
For image 21: since nothing about vehicle A, vehicle B, or vehicle C is yet recorded in the best cache queues, image 211 (partial image), image 212 (partial image), and image 213 (partial image) are cropped directly, without considering target quality, and each is put into the best cache queue of its vehicle. Each vehicle has its own best cache queue and sub-optimal cache queue.
For image 22: image 221 is of higher quality than image 212, so image 221 is placed in vehicle B's best cache queue, covering image 212, which goes to vehicle B's sub-optimal queue. The quality score of image 222 is 40 points, so it is not cropped.
For image 23: image 231 (partial image) is cropped, and since its quality is higher than that of image 213, image 231 is put into vehicle C's best cache queue, covering image 213.
For image 24: since no target (vehicle) is recorded, no cropping is done.
Table 2 describes the images in the cache queues of each vehicle.
Vehicle A: best cache queue, image 211 (from image 21); sub-optimal queue, empty
Vehicle B: best cache queue, image 221 (from image 22); sub-optimal queue, image 212 (from image 21)
Vehicle C: best cache queue, image 231 (from image 23); sub-optimal queue, empty
(Reconstructed from the description; the original table is an image.)
TABLE 2
In step S23, according to the partial images stored in the cache queues, the partial image, and the full image providing it, are selected and sent to the storage device, and the sent full image is recorded in the frame number queue. This step may be performed when the disappearance of a vehicle is detected.
The strategy for selecting the partial image is: for any one vehicle, if the record in the frame number queue shows that a full image in which the vehicle is recorded has already been sent once, and in that already-sent full image the vehicle's quality satisfies the quality threshold (checking the quality threshold is an optional condition), then the partial image meeting the quality threshold is looked up in the sub-optimal queue and transmitted. For such a vehicle, no additional full image needs to be sent.
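A minimal sketch of this selection strategy (the names and tuple layout are our assumptions; the quality check is the optional condition mentioned above):

```python
def select_for_transmission(best, suboptimal, sent_fulls, threshold=50):
    """Pick what to send when a target disappears (illustrative sketch).

    best:       (partial_id, full_frame, score) from the best queue.
    suboptimal: list of (partial_id, full_frame, score) qualified crops.
    sent_fulls: mutable set of full-image frame numbers already sent
                (the "frame number queue"); updated in place.
    Returns (partial_id, full_frame, full_image_must_be_sent)."""
    for partial_id, full_frame, score in suboptimal:
        if full_frame in sent_fulls and score >= threshold:
            # Reuse an already-sent full image: send the partial only.
            return partial_id, full_frame, False
    partial_id, full_frame, _ = best
    must_send = full_frame not in sent_fulls
    if must_send:
        sent_fulls.add(full_frame)
    return partial_id, full_frame, must_send
```

Replaying the worked example with hypothetical scores: vehicle A sends image 211 plus full image 21; vehicle B then reuses full image 21 and sends only image 212; vehicle C sends image 231 plus full image 23.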
The following describes the images transmitted by the camera at different points in time based on the four images referred to in table 1, in combination with table 2.
(i) When the camera extracts image 22 from the video stream, it detects that vehicle A has disappeared relative to the preceding image (image 21 is the preceding image of image 22; image 22 is the succeeding image of image 21). The disappearance of vehicle A is thus the trigger condition, and at this point the images related to vehicle A are selected and sent.
As can be seen, for vehicle A, image 211 is in the best cache queue, and the full image corresponding to image 211 is image 21. From the full-image frame number queue (blank at this point), the camera has not previously sent any full image recording vehicle A, so it directly selects image 211 and image 21 to send to the storage device, and the frame number of the selected full image (i.e., image 21) is entered into the full-image frame number queue. In addition, the correspondence between image 211 and image 21 is also sent to the storage device.
(ii) When the camera extracts image 23 from the video stream, it detects that vehicle B has disappeared relative to the preceding image (image 22), so the images related to vehicle B need to be sent.
For vehicle B, the full-image frame number queue shows that image 21 has already been sent. Image 21 contains a partial image of vehicle B (image 212), and in image 21 the quality score of vehicle B is higher than 50. Although the quality score of vehicle B in image 21 is lower than in image 22, image 212 is selected directly from the sub-optimal cache and sent to the storage device (instead of sending image 221 and the full image 22 in which image 221 is located), together with the correspondence between image 212 and image 21, thereby "multiplexing" the full image. That is, 2 partial images (image 212 and image 211) jointly correspond to the same full image (image 21).
Compared with the procedure described in steps S11-S13, the difference is as follows: because the quality of vehicle B in image 22 is higher than in image 21, that procedure would have to send vehicle B's optimal partial image 221 and the image 22 to which it belongs. This embodiment therefore sends one full image fewer (image 22) than the embodiment of steps S11-S13, reducing the network burden and the space occupied on the storage device.
(iii) When the camera extracts image 24, it detects that vehicle C has disappeared relative to the preceding image (image 23), triggering the sending of the images related to vehicle C.
For vehicle C, the full-image frame number queue shows that image 21 has already been sent. Image 21 contains a partial image of vehicle C (image 213); however, in image 21 the quality score of vehicle C is below 50, while the quality score of image 231 is above the threshold. Therefore, for vehicle C, image 231 and its corresponding full image 23 are selected and sent to the storage device, together with the correspondence between image 231 and image 23.
It can be seen that in this embodiment the camera sends a total of 5 images to the storage device (3 partial images + 2 full images).
It should be noted that, for vehicle C, another alternative is to disregard image quality: since the full-image frame number queue shows that image 21 has already been sent, image 213 (together with the association between image 213 and image 21) is sent directly, instead of image 231 and image 23. In this case, the camera sends a total of 4 images to the storage device (3 partial images + 1 full image).
In actual use, for a particular vehicle, step S22 is performed first and step S23 afterwards. Since the camera continuously extracts a large number of images from the video stream, the operations of extracting images and sending images are in practice performed continuously.
In this embodiment, the partial images sent to the storage device may be called the first set of partial images, and the partial images in the cache queues the second set of partial images. Some partial images in the second set record the same target, while in the first set different partial images record different targets. The first set of partial images is thus a subset of the second set, and the number of partial images in the first set is less than the number in the second set.
The invention also provides a camera image transmission apparatus capable of executing the above method. The apparatus is, for example, a camera, or a software module in a camera, which may be run by the image signal processor (ISP) 131.
The image transmission apparatus includes a full-image acquisition module 31, a partial-image acquisition module 32, and a sending module 33.
The full-image acquisition module 31 may perform step S21 and is configured to extract, from the video stream, a plurality of full images recording a plurality of targets, each full image recording at least one target. The partial-image acquisition module 32 may perform the partial-image and full-image selection of steps S22 and S23, and is configured to obtain a first set of partial images from the plurality of full images, where different partial images record different targets and at least two partial images in the first set are obtained from the same full image. The sending module 33 may perform the sending in step S23, and is configured to send the first set of partial images, and the full images corresponding to the partial images in the first set, to the storage device, where the number of full images sent is smaller than the number of partial images.
The sending module 33 is further configured to establish, in the data sent to the storage device, the correspondence between a partial image in the first set and the full image containing it.
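The correspondence could be carried as a small metadata record alongside the images; the JSON field names below are purely illustrative, as the patent does not specify a wire format:

```python
import json

def correspondence_record(partial_id, full_frame):
    """Serialize the partial-to-full association sent with the images.
    Field names are illustrative, not the patent's format."""
    return json.dumps({"partial": partial_id, "full_frame": full_frame})

# correspondence_record("212", 21) -> '{"partial": "212", "full_frame": 21}'
```

On the storage server, such records let image 212 and image 211 both resolve to the single stored copy of full image 21.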
The partial-image acquisition module 32 may specifically be configured to: obtain a second set of partial images from the plurality of full images, wherein some partial images in the second set record the same target and come from different full images; and select the first set of partial images from the second set, wherein different partial images in the first set record different targets.
The invention also provides a program product which can be run in a video camera for carrying out the aforementioned method steps S21-S23. One or more of the above modules or units may be implemented in software, hardware, or a combination of both. When any of the above modules or units are implemented in software, the software exists in the form of computer program instructions and is stored in a memory, a processor can be used to execute the program instructions and implement the above method flows. The processor may include, but is not limited to, at least one of: a central processing unit (central processing unit, CPU), microprocessor, digital Signal Processor (DSP), microcontroller (microcontroller unit, MCU), or artificial intelligence processor, each of which may include one or more cores for executing software instructions to perform operations or processes. The processor may be built into a SoC (system on a chip) or an application specific integrated circuit (application specific integrated circuit, ASIC) or may be a separate semiconductor chip. The processor may further include necessary hardware accelerators, such as field programmable gate arrays (field programmable gate array, FPGAs), PLDs (programmable logic devices), or logic circuits implementing dedicated logic operations, in addition to the cores for executing software instructions for operation or processing.
Referring to fig. 1, the present invention also provides an embodiment of a camera, including: a lens 11 for receiving light; a sensor 12 for photoelectric conversion to generate an original image; and a processor system 13.
The processor system 13 is configured to process the original image and to perform the method embodiments described above. The method performed by the processor system 13 includes, for example: extracting, from a video stream, a plurality of full images in which a plurality of targets are recorded, wherein each full image records at least one target; obtaining a first group of partial images according to the plurality of full images, wherein the first group of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first group of partial images are obtained from the same full image among the plurality of full images; and sending, to a storage device, the first group of partial images and the full images corresponding to the partial images in the first group of partial images, wherein the number of full images sent is smaller than the number of partial images sent.
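The three steps above can be sketched end to end as follows. The frame representation and the quality scores are assumptions made for illustration; target detection and cropping are abstracted away, and the function name `transmit` is invented.

```python
# Hypothetical end-to-end sketch: pick the best crop per target, then
# transmit one partial image per target plus only the full images those
# crops came from -- several crops may share one full image, so fewer
# full images than partial images can be sent.

def transmit(frames):
    """frames: list of (full_id, {target_id: quality}) pairs in time order.
    Returns (partials, fulls) where partials has one entry per target."""
    best = {}  # target_id -> (full_id, quality)
    for full_id, targets in frames:
        for target_id, quality in targets.items():
            if target_id not in best or quality > best[target_id][1]:
                best[target_id] = (full_id, quality)
    partials = [(t, f) for t, (f, _) in best.items()]
    fulls = sorted({f for _, f in partials})
    return partials, fulls
```

Because the best crops of several targets may come from the same frame, the set of referenced full images is typically smaller than the set of partial images, which is the bandwidth saving the method claims.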
When the above modules or units are implemented in hardware, the hardware may be any one of, or any combination of, a CPU, microprocessor, DSP, MCU, artificial intelligence processor, ASIC, SoC, FPGA, PLD, dedicated digital circuit, hardware accelerator, or non-integrated discrete device, which may run the necessary software or operate independently of software to perform the above method flows.

Claims (14)

1. A camera image transmission method, comprising:
extracting, from a video stream, a plurality of full images in which a plurality of targets are recorded, wherein each full image records at least one target;
obtaining a first group of partial images according to the plurality of full images, wherein the first group of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first group of partial images are obtained from the same full image among the plurality of full images;
and sending, to a storage device, the first group of partial images and the full images corresponding to the partial images in the first group of partial images, wherein the number of full images sent is smaller than the number of partial images sent.
2. The method according to claim 1, characterized in that:
the first group of partial images comprises a first partial image and a second partial image, and the sent full images comprise a first full image, wherein the first partial image and the second partial image are both derived from the first full image.
3. The method of claim 1, wherein obtaining the first group of partial images according to the plurality of full images further comprises:
determining the first group of partial images based on image quality.
4. The method of claim 3, wherein the image quality is related to at least one of:
the pose of the target in the image;
the image resolution;
the visibility of target key points in the image;
the target recognition rate in the image.
5. The method of claim 1, wherein sending the first group of partial images specifically comprises:
examining the extracted plurality of full images in time order and, for each target, when the disappearance of the target is detected, triggering the obtaining of the partial image of the disappeared target and sending that partial image to the storage device.
6. The method according to claim 1, characterized in that:
the method further comprises establishing, in the images sent to the storage device, a correspondence between a partial image in the first group of partial images and the full image from which that partial image is derived.
7. The method of claim 1, wherein obtaining the first group of partial images according to the plurality of full images comprises:
obtaining a second group of partial images according to the plurality of full images, wherein the second group of partial images includes partial images that record the same target, and the partial images recording the same target come from different full images;
and selecting the first group of partial images from the second group of partial images, wherein different partial images in the first group of partial images record different targets.
8. A camera image transmission apparatus, comprising:
a full-image acquisition module, configured to extract, from a video stream, a plurality of full images in which a plurality of targets are recorded, wherein each full image records at least one target;
a partial-image acquisition module, configured to obtain a first group of partial images according to the plurality of full images, wherein different partial images record different targets, and at least two partial images in the first group of partial images are obtained from the same full image among the plurality of full images;
and a sending module, configured to send, to a storage device, the first group of partial images and the full images corresponding to the partial images in the first group of partial images, wherein the number of full images sent is smaller than the number of partial images sent.
9. The camera image transmission apparatus according to claim 8, wherein:
the first group of partial images comprises a first partial image and a second partial image, the full images sent by the sending module comprise a first full image, and the first partial image and the second partial image are both derived from the first full image.
10. The camera image transmission apparatus according to claim 8, wherein the partial-image acquisition module is configured to:
determine the first group of partial images based on image quality.
11. The camera image transmission apparatus according to claim 9 or 10, wherein the image quality of a partial image in the first group of partial images is not the best image quality of its target among the plurality of full images.
12. The camera image transmission apparatus according to claim 8, wherein the partial-image acquisition module is further configured to:
establish, in the images sent to the storage device, a correspondence between a partial image in the first group of partial images and the full image from which that partial image is derived.
13. The camera image transmission apparatus according to claim 8, wherein the partial-image acquisition module is specifically configured to:
obtain a second group of partial images according to the plurality of full images, wherein the second group of partial images includes partial images that record the same target, and the partial images recording the same target come from different full images;
and select the first group of partial images from the second group of partial images, wherein different partial images in the first group of partial images record different targets.
14. A video camera, comprising:
a lens, configured to receive light;
a sensor, configured to perform photoelectric conversion and generate an original image;
a processor system, configured to process the original image, and to:
extract, from a video stream, a plurality of full images in which a plurality of targets are recorded, wherein each full image records at least one target;
obtain a first group of partial images according to the plurality of full images, wherein the first group of partial images comprises a plurality of partial images, different partial images record different targets, and at least two partial images in the first group of partial images are obtained from the same full image among the plurality of full images;
and send, to a storage device, the first group of partial images and the full images corresponding to the partial images in the first group of partial images, wherein the number of full images sent is smaller than the number of partial images sent.
CN202111645121.8A 2021-12-16 2021-12-30 Video camera image transmission method and device and video camera Pending CN116347020A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/139015 WO2023109867A1 (en) 2021-12-16 2022-12-14 Camera image transmission method and apparatus, and camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2021115409406 2021-12-16
CN202111540940 2021-12-16

Publications (1)

Publication Number Publication Date
CN116347020A true CN116347020A (en) 2023-06-27

Family

ID=86880913

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111645121.8A Pending CN116347020A (en) 2021-12-16 2021-12-30 Video camera image transmission method and device and video camera

Country Status (1)

Country Link
CN (1) CN116347020A (en)


Legal Events

Date Code Title Description
PB01 Publication