CN114531554B - Video fusion synthesis method and device of express mail code recognizer - Google Patents

Video fusion synthesis method and device of express mail code recognizer

Info

Publication number
CN114531554B
CN114531554B
Authority
CN
China
Prior art keywords
frame
camera
frames
express
code
Prior art date
Legal status
Active
Application number
CN202210432936.6A
Other languages
Chinese (zh)
Other versions
CN114531554A (en)
Inventor
严振声
Current Assignee
Zhejiang Huayan Vision Technology Co ltd
Original Assignee
Zhejiang Huayan Vision Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Huayan Vision Technology Co ltd filed Critical Zhejiang Huayan Vision Technology Co ltd
Priority to CN202210432936.6A priority Critical patent/CN114531554B/en
Publication of CN114531554A publication Critical patent/CN114531554A/en
Application granted granted Critical
Publication of CN114531554B publication Critical patent/CN114531554B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G06K7/10831Arrangement of optical elements, e.g. lenses, mirrors, prisms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Electromagnetism (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Sorting Of Articles (AREA)
  • Character Discrimination (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

When at least two express code recognition machines request the live video of the same global camera, the global camera adjusts its frame rate to the least common multiple of the code scanning camera frame rates, and the ratio of the global camera frame rate to each code scanning camera frame rate is calculated as a coding factor. After the global camera video stream is obtained, the I frame and the P frames whose frame numbers are multiples of the coding factor are selected from each coding group; for each selected P frame, the P frames between it and the previously selected frame are fused into it to obtain a corresponding fused P frame, and the selected I frame and the fused P frames are used to generate a new global camera video stream. When recognition machines with different frame rates call the same global camera, the invention obtains a global camera code stream with clear images and stores it in synthesized form, thereby comprehensively preserving the evidence of express pickup.

Description

Video fusion synthesis method and device of express mail code recognizer
Technical Field
The application belongs to the technical field of express delivery pickup evidence preservation, and particularly relates to a video fusion synthesis method and device of an express delivery code recognition machine.
Background
With the development of electronic commerce, express logistics has become a common way of sending goods, and goods sent in this way are called express items. However, when a courier delivers an express item, it is often difficult to hand it directly to the recipient because the recipient is not at the delivery address. Express lockers, express stations and the like have therefore emerged, providing great convenience for the temporary storage and collection of residents' express items and saving delivery manpower.
At present, an express station is equipped with several shelves, and each level of each shelf has its own number. When a courier places express items on a shelf, each item is assigned an independent number such as "shelf XX, level XXX, position XXX" and registered into the station's code scanning system for warehousing. The express station sends this information to the recipient's mobile phone, and the recipient goes to the relevant station to collect the item accordingly. After locating the express item by the received position information, the recipient places it on the express code recognition machine for recognition and then takes it away.
However, express items are still occasionally lost at existing stations because someone takes the wrong item, and the relevant person then has to be identified by retrieving surveillance videos around the time in question. Since the videos of the station's cameras are stored independently, this search is cumbersome, monitoring blind spots may exist, and the relevant person cannot always be identified accurately.
Disclosure of Invention
The application aims to provide a video fusion synthesis method and device of an express code recognition machine, so that evidence about the person who picks up an express item can be accurately preserved.
In order to achieve this purpose, the technical solution of the application is as follows:
A video fusion synthesis method of an express code recognition machine is applied to an express station. The express station is provided with a global camera and at least two express code recognition machines; each express code recognition machine is provided with a code scanning camera and a pickup person camera, and the code scanning camera and the pickup person camera of the same machine have the same frame rate. When an express code recognition machine recognizes an express item successfully, it synthesizes and stores the videos shot by the code scanning camera, the pickup person camera and the global camera. The video fusion synthesis method of the express code recognition machine includes:
if at least two express code recognition machines request the live video of the same global camera, the global camera adjusts its frame rate to the least common multiple of the code scanning camera frame rates;
calculating the ratio of the global camera frame rate to each code scanning camera frame rate as a coding factor;
after acquiring the global camera video stream, selecting from each coding group the I frame and the P frames whose frame numbers are multiples of the coding factor; for each selected P frame, acquiring the P frames between it and the previously selected frame and fusing them to obtain a corresponding fused P frame; and generating a new global camera video stream from the selected I frame and the fused P frames;
and synthesizing and storing the code scanning camera video, the pickup person camera video and the new global camera video.
Further, when an express code recognition machine stops requesting the live video of the same global camera, the global camera adjusts its frame rate to the least common multiple of the frame rates of the remaining code scanning cameras.
Further, after acquiring the global camera video stream, selecting from each coding group the I frame and the P frames whose frame numbers are multiples of the coding factor, fusing, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generating a new global camera video stream from the selected I frame and the fused P frames, includes:
after receiving the global camera video stream, the express code recognition machine selects, according to the coding factor corresponding to its code scanning camera, the I frame and the P frames whose frame numbers are multiples of the coding factor from each coding group, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generates a new global camera video stream from the selected I frame and the fused P frames.
Further, after acquiring the global camera video stream, selecting from each coding group the I frame and the P frames whose frame numbers are multiples of the coding factor, fusing, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generating a new global camera video stream from the selected I frame and the fused P frames, includes:
the network device to which the express code recognition machine is attached acquires the coding factor corresponding to that machine, selects from each coding group of the received global camera video stream the I frame and the P frames whose frame numbers are multiples of the coding factor, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, generates a new global camera video stream from the selected I frame and the fused P frames, and sends it to the corresponding express code recognition machine.
The application also provides a video fusion synthesis device of the express code recognition machine, comprising a processor and a memory storing computer instructions, wherein the computer instructions, when executed by the processor, implement the steps of the video fusion synthesis method of the express code recognition machine.
When at least two express code recognition machines request the live video of the same global camera, the global camera adjusts its frame rate to the least common multiple of the code scanning camera frame rates, and the ratio of the global camera frame rate to each code scanning camera frame rate is calculated as a coding factor. After the global camera video stream is obtained, the I frame and the P frames whose frame numbers are multiples of the coding factor are selected from each coding group; for each selected P frame, the P frames between it and the previously selected frame are fused into it to obtain a corresponding fused P frame; and the selected I frame and the fused P frames are used to generate a new global camera video stream. This technical solution eliminates the blurring caused by differing frame rates when two recognition machines with different frame rates call the same global camera: a global camera code stream with clear images is obtained, synthesized and stored, so that pickup evidence is preserved comprehensively.
Drawings
Fig. 1 is a flowchart of the video fusion synthesis method of the express code recognition machine of the present application.
Fig. 2 is a schematic diagram of P frame fusion according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The application provides a video fusion synthesis method and device of an express code recognition machine, applied to express stations. Express stations are now commonly equipped with express code recognition machines so that pickup persons can scan the code of an express item and check it out of the warehouse after collecting it. This mode facilitates station management and reduces labor cost. To reduce mistaken pickups and to provide pickup evidence, a conventional express station is also provided with global cameras for monitoring the whole station; the global cameras are usually distributed at the corners of the station and together cover the entire station.
The express code recognition machine is provided with a downward-facing code scanning camera and is connected to the global cameras through a network. The machine is also provided with a pickup person camera, which shoots a frontal video of the pickup person while the code is scanned and the item is checked out. When a pickup person places an express item on the table and scans its code to check it out, the code scanning camera displays the item's picture on the screen in real time while detecting the barcode on the package; once the barcode is found, a rectangular frame is drawn around the barcode area in the picture and a sound prompt indicates successful recognition.
To keep video evidence of the pickup, the express code recognition machine stores the video shot by the code scanning camera, the video shot by the pickup person camera and the retrieved global camera video as pickup evidence.
Specifically, when a pickup person stands in front of the express code recognition machine to scan an item, the person's face is not necessarily directly facing the machine, and its orientation may shake and change. The orientation can be observed through the pickup person camera. According to the face orientation, the express code recognition machine calls the global camera that best captures the pickup person's frontal view to obtain real-time video, switches global cameras and splices the segments along the timeline when necessary, and combines this video and the pickup person camera video into the video shot by the code scanning camera as the archived video evidence.
Before the pickup person reaches the express code recognition machine, the machine selects the most appropriate global camera according to the pickup person's orientation seen by the pickup person camera and the position information shared between the cameras, applies for a live video stream, and acquires a long-range video of the pickup person.
When the express code recognition machine observes through the pickup person camera that the pickup person's orientation has changed, it finds a more appropriate global camera according to the position information shared among the cameras and applies for a live video stream from the new global camera. After the new live stream is obtained, the stream requested from the original global camera is stopped.
The express code recognition machine thus captures several segments of global camera video during the pickup person's code scanning period, trims each segment so that it starts from an I frame, splices the segments in time order, and synthesizes an overall global camera video stream. At the same time, it captures the live video stream collected by the pickup person camera during the code scanning.
The express code recognition machine then embeds the captured overall global camera video stream and the captured pickup person camera live stream as small pictures into the desk video stream shot by the code scanning camera, and archives the result as video stream evidence.
Multiple video streams are typically composed by extracting frames from each video stream and combining them in a uniform layout. For example, if the code scanning camera and the pickup person camera of the express code recognition machine both run at 25 frames/second, the 25 frames of each 1-second period are extracted from the desk video stream, the pickup person video stream and the global camera video stream, and the pictures are merged and re-encoded into a new video according to a uniform layout (for example, the pickup person frame and the global camera frame are overlaid onto the lower-left and lower-right corners of the desk frame).
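As an illustration of this layout step only (not part of the claimed method), the following Python sketch overlays a scaled-down pickup person frame and global camera frame onto the desk frame. It assumes OpenCV is available; the thumbnail size, margins and corner placement are hypothetical choices.

    import cv2

    def compose_frame(desk_frame, person_frame, global_frame,
                      thumb_w=320, thumb_h=180, margin=10):
        # Overlay small pickup-person and global-camera pictures onto the
        # desk (code scanning camera) frame: person at lower-left,
        # global camera at lower-right.
        out = desk_frame.copy()
        h, w = out.shape[:2]
        person_small = cv2.resize(person_frame, (thumb_w, thumb_h))
        global_small = cv2.resize(global_frame, (thumb_w, thumb_h))
        out[h - thumb_h - margin:h - margin, margin:margin + thumb_w] = person_small
        out[h - thumb_h - margin:h - margin, w - thumb_w - margin:w - margin] = global_small
        return out

Repeating this for every frame at the common frame rate yields the composed evidence video described above.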
A problem arises when express code recognition machine A and express code recognition machine B apply for a live stream from the same global camera at the same time: the former runs at 25 frames/second and the latter at 30 frames/second, while the global camera can only encode one video stream. The video encoding of the global camera therefore needs to be changed to suit the video composition method.
In one embodiment, as shown in fig. 1, a video fusion synthesis method of an express code recognition machine is provided and applied to an express station. The express station is provided with a global camera and at least two express code recognition machines; each machine is provided with a code scanning camera and a pickup person camera, and the two cameras of the same machine have the same frame rate. When an express code recognition machine recognizes an express item successfully, it synthesizes and stores the videos shot by the code scanning camera, the pickup person camera and the global camera. The video fusion synthesis method of the express code recognition machine includes:
and step S1, if at least two express code identification machines call the video live of the same global camera, the global camera adjusts the frame rate to the least common multiple of the frame rate of the code scanning camera.
In this embodiment, a courier station is exemplified by two courier code identification machines, which are an identification machine a and an identification machine B, respectively, and a code scanning camera and a pickup camera in each identification machine are cameras equipped in the same device, and their frame rates are the same. However, the frame rates of the cameras of the recognition device a and the recognition device B are different. The following explains the case where the camera frame rate of the recognition device a is 25 frames/second and the camera frame rate of the recognition device B is 30 frames/second.
When only one express code recognition machine applies for a live video stream to the global camera, the express code recognition machine embeds the frame rate information of the express code recognition machine in a request message for broadcasting the live stream, for example, the code scanner A embeds the frame rate information of 25 frames/second in the request message; after receiving the message, the global camera changes the frame rate of video coding to 25 frames/second and sends the video coding through multicast; and the code scanner A receives the video stream and synthesizes the video stream.
When two code scanners apply for live streams to the same global camera and the global camera receives a request, and finds that the respective frame rates of the two code scanners are different, the minimum common multiple of the frame rates is taken, and the minimum common multiple of 25 and 30 is 150. The global camera modifies the frame rate to 150 frames/second for multicast transmission, and transmits a strategy of 150 frames/second to the code scanner A and the code scanner B.
Since each P frame in the video stream represents the difference of the current frame with respect to the previous frame, if each scanner takes P frames from the video stream of the global camera at intervals, the video stream will be blurred due to the loss of information. The present application proposes a method of eliminating such ambiguities, which is set forth in subsequent steps.
Step S2: calculate the ratio of the global camera frame rate to each code scanning camera frame rate as a coding factor.
The ratio of the global camera frame rate to each code scanning camera frame rate is calculated as the coding factor:
for recognition machine A, the frame rate is 25 frames/second and the global camera frame rate is 150 frames/second, so coding factor A is equal to 6;
for recognition machine B, the frame rate is 30 frames/second and the global camera frame rate is 150 frames/second, so coding factor B is equal to 5.
It should be noted that the coding factor may be calculated either by the express code recognition machine or by the global camera; whichever device calculates it only needs the global camera frame rate and each code scanning camera frame rate, and once calculated the coding factor can be shared with the other devices over the network, so that they obtain it as well, which is not described in detail below.
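A minimal sketch of the frame-rate negotiation and coding-factor calculation, using only the arithmetic stated above (the function names are illustrative, not taken from the patent):

    from functools import reduce
    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)

    def negotiate(scanner_rates):
        # Global camera rate = least common multiple of the requested rates;
        # coding factor = global rate divided by each code scanning camera rate.
        global_rate = reduce(lcm, scanner_rates)
        factors = {rate: global_rate // rate for rate in scanner_rates}
        return global_rate, factors

    # Example from this embodiment: recognition machine A at 25 fps, B at 30 fps
    rate, factors = negotiate([25, 30])
    # rate == 150, factors == {25: 6, 30: 5}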
Step S3: after the global camera video stream is obtained, select from each coding group the I frame and the P frames whose frame numbers are multiples of the coding factor; for each selected P frame, obtain the P frames between it and the previously selected frame and fuse them to obtain a corresponding fused P frame; and generate a new global camera video stream from the selected I frame and the fused P frames.
In this method the global camera encodes at the least-common-multiple frame rate, so after encoding each coding group (GOP) of the global camera contains 150 frames.
In this embodiment, each GOP of the video stream sent by the global camera contains 150 frames, and to match the frame rate of the express code recognition machine's code scanning camera, some frames must be selected from the GOP to form a new global camera video stream. This step may be performed in the express code recognition machine or in another network device, as described below.
In a specific embodiment, the operation is performed by the express code recognition machine, as follows:
after receiving the global camera video stream, the express code recognition machine selects, according to the coding factor corresponding to its code scanning camera, the I frame and the P frames whose frame numbers are multiples of the coding factor from each coding group, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generates a new global camera video stream from the selected I frame and the fused P frames.
After receiving the global camera video stream, the express code recognition machine of this embodiment needs to select from it a number of frames equal to the number of frames in one coding group of its code scanning camera.
For example, since the frame rate of recognition machine A is 25 frames/second, 25 frames must be selected from the 150 frames. The I frame and the P frames whose frame numbers are multiples of the coding factor are selected from the global camera coding group: the leading I frame is selected, together with P6, P12, ..., P144, whose frame numbers are multiples of the coding factor 6.
Similarly, since the frame rate of recognition machine B is 30 frames/second, 30 frames must be selected from the 150 frames: the leading I frame is selected, together with P5, P10, ..., P145, whose frame numbers are multiples of the coding factor 5.
However, if these selected frames were used directly as the global camera video code stream, the gap between P6 and the I frame, and between P12 and P6, would span several dropped frames whose difference information is lost, and the resulting video would easily appear blurred.
Therefore, the present application adjusts the selected P frames:
for 25 frames selected by recognizer A, except I frame, the rest P frames are respectively fused with the information of the previous unselected P frames to generate new P frame, and the information contained in this P frame is the difference value of this frame with respect to the previous selected P frame.
For example, as shown in fig. 2, P6 fuses information of P1 to P5, generating a new P6 frame; the P12 fuses the information of P7 to P11, generating a new P12 frame. New P18, new P24, new P30 … and new P144, in turn, using the same strategy.
Obviously, the information of the new P6 frame is the difference of the picture of the present frame with respect to the I frame, and the information of the new P12 frame is the difference of the picture of the present frame with respect to the new P6 frame.
For 30 frames selected by recognizer B, I frame, P5 frame, P10 frame, and up to P145 frame, 30 frames. Then, except the I frame, the other P frames are respectively fused with the information of the previous several unselected P frames to generate a new P frame, and the information contained in the P frame is the difference value of the frame relative to the previous selected P frame.
For example, P5 fuses information of P1 to P4 to generate a new P5 frame, and P10 fuses information of P6 to P9 to generate a new P10 frame. New P15, new P20, new P25 … and new P145, in turn, adopt the same strategy.
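The selection and fusion described above can be sketched as follows. For illustration the sketch treats every P frame as a simple additive difference image relative to the immediately preceding frame, which is a deliberate simplification of real inter-frame coding (motion compensation and residual quantization are ignored); the container format and names are hypothetical.

    import numpy as np

    def downsample_gop(i_frame, p_diffs, coding_factor):
        # i_frame : decoded picture of the GOP's I frame (ndarray)
        # p_diffs : list of difference images for P1..Pn, each relative to
        #           the immediately preceding frame (simplified model)
        # Keeps the I frame and every coding_factor-th P frame; each kept
        # P frame is fused with the skipped differences before it, so it
        # carries the full difference relative to the previously kept frame.
        kept = [("I", i_frame)]
        pending = np.zeros_like(i_frame, dtype=np.int32)
        for idx, diff in enumerate(p_diffs, start=1):
            pending += diff
            if idx % coding_factor == 0:
                kept.append(("P%d" % idx, pending.copy()))
                pending = np.zeros_like(i_frame, dtype=np.int32)
        return kept

With a 150-frame GOP, a coding factor of 6 keeps the I frame and P6, P12, ..., P144 (25 frames), and a coding factor of 5 keeps the I frame and P5, P10, ..., P145 (30 frames), matching recognition machines A and B above.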
In this embodiment, recognition machine A finally receives the multicast stream sent by the global camera, keeps the I frame and the fused P6, P12, P18, ..., P144 of each GOP (25 frames per GOP), and uses them for image composition to generate a new video stream; all of its P frames are new, fused P frames.
Recognition machine B receives the multicast stream sent by the global camera, keeps the I frame and the fused P5, P10, P15, ..., P145 of each GOP (30 frames per GOP), and uses them for image composition to generate a new video stream; all of its P frames are new, fused P frames.
In another specific embodiment, the new global camera video stream is generated by the network device to which the express code recognition machine is attached, as follows:
the network device to which the express code recognition machine is attached acquires the coding factor corresponding to that machine, selects from each coding group of the received global camera video stream the I frame and the P frames whose frame numbers are multiples of the coding factor, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, generates a new global camera video stream from the selected I frame and the fused P frames, and sends it to the corresponding express code recognition machine.
Unlike the previous embodiment, in this embodiment the network device to which the express code recognition machine is attached processes the video stream sent by the global camera and generates a new global camera video stream whose frame rate matches that of the corresponding recognition machine. The express code recognition machine is the main equipment of the express station and already bears a heavy workload, while the network device it is attached to only forwards data and has spare processing capacity. Having the network device process the video frames therefore reduces the load on the express code recognition machine, which obtains the new video stream transparently and without spending extra processing capacity.
For example, after the network device connected to recognition machine A receives the global camera video stream, it selects the 25 frames and, except for the I frame, fuses each selected P frame with the information of the preceding unselected P frames to generate new P frames. These 25 frames are then sent to recognition machine A, which generates the new video stream.
Similarly, after the network device connected to recognition machine B receives the global camera video stream, it selects the 30 frames and, except for the I frame, fuses each selected P frame with the information of the preceding unselected P frames to generate new P frames. These 30 frames are then sent to recognition machine B, which generates the new video stream.
If recognition machine A and recognition machine B are attached to the same network device, that device performs the selection separately for each machine, generates the two new video streams, and sends them to recognition machine A and recognition machine B respectively, which is not described further here.
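A sketch of this fan-out on the network device, under the same simplified frame model; the per-GOP reduction function and the forwarding callback are passed in as placeholders, since the patent does not specify the transport:

    def relay_global_stream(gops, recognizer_factors, reduce_gop, send_to):
        # gops               : iterable of (i_frame, p_diffs) tuples decoded from
        #                      the global camera's multicast stream
        # recognizer_factors : e.g. {"A": 6, "B": 5}, coding factor per machine
        # reduce_gop         : callable(i_frame, p_diffs, factor) -> reduced GOP,
        #                      e.g. the downsample_gop sketch above
        # send_to            : callable(machine_id, reduced_gop) forwarding frames
        for i_frame, p_diffs in gops:
            for machine_id, factor in recognizer_factors.items():
                send_to(machine_id, reduce_gop(i_frame, p_diffs, factor))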
Step S4: synthesize and store the code scanning camera video, the pickup person camera video and the new global camera video.
After the new global camera video stream is obtained, it can be synthesized with the other videos and stored as evidence. The recognition machine's main purpose is to preserve, as evidence, footage of the pickup person scanning the code and checking the item out in front of the machine; this may be a period of video spanning the moment of recognition, or a period after recognition succeeds. Once the videos of all the cameras are obtained, they can be synthesized by the express code recognition machine: the videos of the code scanning camera, the pickup person camera and the global camera can be packaged together for storage, or the pickup person camera and global camera videos can be embedded as picture-in-picture into the video shot by the code scanning camera for storage.
In another embodiment, the present application further provides a video fusion synthesis device of an express code recognition machine, comprising a processor and a memory storing computer instructions, wherein the computer instructions, when executed by the processor, implement the steps of the video fusion synthesis method of the express code recognition machine.
For the specific limitations of the video fusion synthesis device of the express code recognition machine, reference may be made to the limitations of the video fusion synthesis method above, which are not repeated here. The video fusion synthesis device of the express code recognition machine can be implemented wholly or partially in software, in hardware, or in a combination of the two. It can be embedded in, or independent of, the processor of the computer device as hardware, or stored in the memory of the computer device as software, so that the processor can call it and execute the corresponding operations.
The memory and the processor are electrically connected, directly or indirectly, to enable transmission or interaction of data; for example, the components may be electrically connected to each other via one or more communication buses or signal lines. The memory stores a computer program that can be executed on the processor, and the processor executes the computer program stored in the memory, thereby implementing the video fusion synthesis method in the embodiments of the present application.
The memory may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory is used for storing programs, and the processor executes a program after receiving an execution instruction.
The processor may be an integrated circuit chip with data processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like, and may implement or execute the methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The above embodiments express only several implementations of the present application and are described in relative detail, but they should not therefore be construed as limiting the scope of the invention. A person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the scope of protection of the present application. The protection scope of this patent shall therefore be subject to the appended claims.

Claims (5)

1. A video fusion synthesis method of an express code recognition machine, applied to an express station, the express station being provided with a global camera and at least two express code recognition machines, each express code recognition machine being provided with a code scanning camera, characterized in that each express code recognition machine is further provided with a pickup person camera, the code scanning camera and the pickup person camera of the same express code recognition machine have the same frame rate, and when an express code recognition machine recognizes an express item successfully it synthesizes and stores the videos shot by the code scanning camera, the pickup person camera and the global camera; the video fusion synthesis method of the express code recognition machine comprises:
if the at least two express code recognition machines request the live video of the same global camera, the global camera adjusts its frame rate to the least common multiple of the code scanning camera frame rates and encodes at this least-common-multiple frame rate, so that the number of frames in each coding group of the global camera video stream equals the number of frames collected by the global camera per second; within a coding group, the first frame is an I frame and the remaining frames are P frames, the first P frame is numbered 1, and the subsequent P frames are numbered sequentially in order;
calculating the ratio of the global camera frame rate to each code scanning camera frame rate as a coding factor;
after acquiring the global camera video stream, selecting from each coding group the I frame and all P frames whose frame numbers are multiples of the coding factor; for each selected P frame, acquiring the P frames between it and the previously selected frame and fusing them to obtain a corresponding fused P frame; and generating a new global camera video stream from the selected I frame and the fused P frames;
and synthesizing and storing the code scanning camera video, the pickup person camera video and the new global camera video.
2. The video fusion and synthesis method of the express code recognizer according to claim 1, wherein when the express code recognizer exits from retrieving the live video of the same global camera, the global camera adjusts the frame rate to the least common multiple of the frame rates of the remaining code-scanning cameras.
3. The video fusion synthesis method of the express code recognition machine according to claim 1, wherein after the global camera video stream is obtained, selecting from each coding group the I frame and all P frames whose frame numbers are multiples of the coding factor, fusing, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generating a new global camera video stream from the selected I frame and the fused P frames, comprises:
after receiving the global camera video stream, the express code recognition machine selects, according to the coding factor corresponding to its code scanning camera, the I frame and all P frames whose frame numbers are multiples of the coding factor from each coding group, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generates a new global camera video stream from the selected I frame and the fused P frames.
4. The video fusion synthesis method of the express code recognition machine according to claim 1, wherein after the global camera video stream is obtained, selecting from each coding group the I frame and all P frames whose frame numbers are multiples of the coding factor, fusing, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, and generating a new global camera video stream from the selected I frame and the fused P frames, comprises:
the network device to which the express code recognition machine is attached acquires the coding factor corresponding to that machine, selects from each coding group of the received global camera video stream the I frame and all P frames whose frame numbers are multiples of the coding factor, fuses, for each selected P frame, the P frames between it and the previously selected frame to obtain a corresponding fused P frame, generates a new global camera video stream from the selected I frame and the fused P frames, and sends it to the corresponding express code recognition machine.
5. A video fusion synthesis device of an express code recognition machine, comprising a processor and a memory storing computer instructions, wherein the computer instructions, when executed by the processor, implement the steps of the method according to any one of claims 1 to 4.
CN202210432936.6A 2022-04-24 2022-04-24 Video fusion synthesis method and device of express mail code recognizer Active CN114531554B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210432936.6A CN114531554B (en) 2022-04-24 2022-04-24 Video fusion synthesis method and device of express mail code recognizer

Publications (2)

Publication Number Publication Date
CN114531554A (en) 2022-05-24
CN114531554B (en) 2022-08-16

Family

ID=81627984

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210432936.6A Active CN114531554B (en) 2022-04-24 2022-04-24 Video fusion synthesis method and device of express mail code recognizer

Country Status (1)

Country Link
CN (1) CN114531554B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8744420B2 (en) * 2010-04-07 2014-06-03 Apple Inc. Establishing a video conference during a phone call
US10536702B1 (en) * 2016-11-16 2020-01-14 Gopro, Inc. Adjusting the image of an object to search for during video encoding due to changes in appearance caused by camera movement

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102959941A (en) * 2010-07-02 2013-03-06 索尼电脑娱乐公司 Information processing system, information processing device, and information processing method
CN102780869A (en) * 2012-06-27 2012-11-14 宇龙计算机通信科技(深圳)有限公司 Video recording device and method
CN106412581A (en) * 2016-06-21 2017-02-15 浙江大华技术股份有限公司 Frame-rate control method and device
CN107357936A (en) * 2017-08-16 2017-11-17 湖南城市学院 It is a kind of to merge multi-source image automatically to provide the context aware system and method for enhancing
CN109413355A (en) * 2018-11-05 2019-03-01 深圳市收收科技有限公司 A kind of method and terminal device for quickly transferring monitoring video
CN111209956A (en) * 2020-01-02 2020-05-29 北京汽车集团有限公司 Sensor data fusion method, and vehicle environment map generation method and system
CN113691862A (en) * 2020-05-19 2021-11-23 深圳市环球数码科技有限公司 Video processing method, electronic equipment for video playing and video playing system
CN112508144A (en) * 2020-12-17 2021-03-16 杭州海康机器人技术有限公司 Package query method, device and system
CN112381894A (en) * 2021-01-15 2021-02-19 清华大学 Adaptive light field imaging calibration method, device and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Heterogeneous Information Fusion and Visualization for a Large-Scale Intelligent Video Surveillance System; ChingTang Fan et al.; IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 47, Issue 4, April 2017; 2016-03-08; full text *
Quality-Based Fusion of Multiple Video Sensors for Video Surveillance; Lauro Snidaro et al.; IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 37, Issue 4, Aug. 2007; 2007-07-09; full text *
Research on optimization of panoramic video transmission for a viewport-dependent mechanism; Xie Shaowei; China Master's Theses Full-text Database (Electronic Journal); 2022-01-15; full text *

Also Published As

Publication number Publication date
CN114531554A (en) 2022-05-24

Similar Documents

Publication Publication Date Title
CN110297943B (en) Label adding method and device, electronic equipment and storage medium
CN110383274B (en) Method, device, system, storage medium, processor and terminal for identifying equipment
CN108062507B (en) Video processing method and device
CN114140945B (en) Cabinet grid application method and device of intelligent cabinet and computing equipment
CN112508144A (en) Package query method, device and system
CN114531554B (en) Video fusion synthesis method and device of express mail code recognizer
CN114638885A (en) Intelligent space labeling method and system, electronic equipment and storage medium
CN114531584B (en) Video interval synthesis method and device of express mail code recognizer
CN111309212A (en) Split-screen comparison fitting method, device, equipment and storage medium
CN108683879A (en) A kind of feature tag identification based on intelligent video camera head and quickly check system
JP2007318688A (en) Video information exchange method, server device, and client device
CN112417914B (en) Data scanning method and device and electronic equipment
CN114118119B (en) Control method and device of intelligent cabinet
CN102496010A (en) Method for recognizing business cards by combining preview images and photographed images
CN113327084A (en) Method, device and terminal for rapidly searching for takeout
CN114554113B (en) Express item code recognition machine express item person drawing method and device
CN107993130B (en) Service processing method and system and electronic equipment
CN113592389A (en) Self-service management method, system, equipment and storage medium for power system warehouse
CN111461639B (en) Order output information generation method, code scanning device and portable packaging tool
CN110544063B (en) Logistics platform driver on-site support system based on AR and method thereof
CN113344160A (en) Order processing method and device, electronic equipment and storage medium
CN112312041A (en) Image correction method and device based on shooting, electronic equipment and storage medium
CN117459682A (en) Image transmission method, device and system
CN111815247A (en) Material turnover control method and control system
US9790029B2 (en) Conveyor-using packing management system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant