CN110633605B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN110633605B
CN110633605B (application CN201810662738.2A)
Authority
CN
China
Prior art keywords
image
value
video network
gray value
original image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810662738.2A
Other languages
Chinese (zh)
Other versions
CN110633605A (en)
Inventor
彭庆太
韩杰
王艳辉
周国强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd filed Critical Visionvera Information Technology Co Ltd
Priority to CN201810662738.2A priority Critical patent/CN110633605B/en
Publication of CN110633605A publication Critical patent/CN110633605A/en
Application granted granted Critical
Publication of CN110633605B publication Critical patent/CN110633605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/10544 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K7/10821 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Abstract

Embodiments of the invention provide an image processing method and apparatus. The method is applied to a video network that includes a video network terminal, and comprises the following steps: the video network terminal acquires an original image to be processed; the video network terminal adjusts the gray value of each pixel point of the original image according to a preset initial threshold to obtain a first image, and parses the first image; if parsing the first image yields no parsing result, the video network terminal adjusts the gray value of each pixel point of the original image again according to a calculated adjustment threshold to obtain a second image, and parses the second image, repeating until a parsing result is obtained. The adjustment threshold is calculated by the video network terminal from the initial threshold, a preset threshold increment, and the number of times the gray values of the pixel points of the original image have been adjusted. Embodiments of the invention improve the efficiency of two-dimensional code recognition on the original image.

Description

Image processing method and device
Technical Field
The present invention relates to the field of video networking technologies, and in particular, to an image processing method and an image processing apparatus.
Background
Video networking is an important milestone in network development. It is a higher-level form of the Internet and a real-time network that can achieve full-network, high-definition, real-time video transmission beyond what the existing Internet can deliver, pushing many Internet applications toward high-definition video and high-definition face-to-face communication.
At present, in two-dimensional code recognition schemes based on the video network, the video network terminal directly recognizes the acquired original image, and the recognition efficiency is low.
Disclosure of Invention
In view of the above, embodiments of the present invention are proposed to provide an image processing method and a corresponding image processing apparatus that overcome or at least partially solve the above-mentioned problems.
To solve the above problem, an embodiment of the present invention discloses an image processing method applied to a video network that includes a video network terminal. The method includes: the video network terminal acquires an original image to be processed; the video network terminal adjusts the gray value of each pixel point of the original image according to a preset initial threshold to obtain a first image, and parses the first image; if parsing the first image yields no parsing result, the video network terminal adjusts the gray value of each pixel point of the original image again according to a calculated adjustment threshold to obtain a second image, and parses the second image, repeating until a parsing result is obtained. The adjustment threshold is calculated by the video network terminal from the initial threshold, a preset threshold increment, and the number of times the gray values of the pixel points of the original image have been adjusted.
Optionally, the adjusting, by the video network terminal, the gray value of each pixel point of the original image according to a preset initial threshold includes: the video network terminal compares the gray value of each pixel point of the original image with the initial threshold value respectively; and the video network terminal sets the gray value of the pixel point of which the gray value is greater than or equal to the initial threshold value in the original image as a preset first gray value, and sets the gray value of the pixel point of which the gray value is less than the initial threshold value in the original image as a preset second gray value.
Optionally, the adjusting, by the video network terminal, the gray value of each pixel point of the original image according to the calculated adjustment threshold again includes: the video network terminal compares the gray value of each pixel point of the original image with the adjustment threshold value respectively; and the video network terminal sets the gray value of the pixel point of which the gray value is greater than or equal to the adjustment threshold value in the original image as the first gray value, and sets the gray value of the pixel point of which the gray value is less than the adjustment threshold value in the original image as the second gray value.
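The two thresholding passes described above are the same fixed-threshold binarization applied with different thresholds. A minimal Python sketch follows; the function and variable names are hypothetical, as the patent supplies no code:

```python
def binarize(pixels, threshold, first=255, second=0):
    """Set each gray value >= threshold to the first gray value and each
    gray value < threshold to the second gray value (the patent allows
    first/second to be 255/0 or 0/255)."""
    return [first if p >= threshold else second for p in pixels]

# A flat list of gray values stands in for the original image.
original = [12, 130, 200, 45, 99, 250]
first_image = binarize(original, threshold=100)   # [0, 255, 255, 0, 0, 255]
```

The same function produces the second image (and any later image) by passing the adjustment threshold instead of the initial threshold.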
Optionally, the first grayscale value is 0 and the second grayscale value is 255, or the first grayscale value is 255 and the second grayscale value is 0.
Optionally, the adjustment threshold is calculated as: S' = S + T × (n - 1); where S' is the adjustment threshold, S is the initial threshold, T is the threshold increment, n is the number of adjustments, 255 > S > 0, T > 1, n ≥ 2, and S and T are integers.
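The adjustment-threshold formula can be sketched directly in Python (names hypothetical):

```python
def adjustment_threshold(s, t, n):
    """S' = S + T * (n - 1): the threshold used for the n-th adjustment,
    given initial threshold S and threshold increment T."""
    assert 0 < s < 255 and t > 1 and n >= 2
    return s + t * (n - 1)

# With S = 20 and T = 5, the second and third adjustments use 25 and 30.
```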
Optionally, the parsing of the first image by the video network terminal includes: the video network terminal performing two-dimensional code scanning on the first image. The parsing of the second image by the video network terminal includes: the video network terminal performing two-dimensional code scanning on the second image.
Optionally, the analysis result is two-dimensional code information.
Optionally, the adjustment threshold is greater than or equal to 20 and/or less than or equal to 150, and the threshold increment is equal to 5.
An embodiment of the present invention also discloses an image processing apparatus, applied to a video network terminal in a video network, the apparatus including: an acquisition module for acquiring an original image to be processed; an adjustment module for adjusting the gray value of each pixel point of the original image according to a preset initial threshold to obtain a first image; and a parsing module for parsing the first image. The adjustment module is further configured to, if the parsing module obtains no parsing result from the first image, adjust the gray value of each pixel point of the original image again according to a calculated adjustment threshold to obtain a second image. The parsing module is further configured to parse the second image, and to continue parsing each new image produced by the adjustment module until a parsing result is obtained. The adjustment threshold is calculated by the video network terminal from the initial threshold, a preset threshold increment, and the number of times the gray values of the pixel points of the original image have been adjusted.
Optionally, the adjusting module includes: the comparison module is used for comparing the gray value of each pixel point of the original image with the initial threshold value respectively; and the setting module is used for setting the gray value of the pixel point of which the gray value is greater than or equal to the initial threshold value in the original image as a preset first gray value and setting the gray value of the pixel point of which the gray value is less than the initial threshold value in the original image as a preset second gray value.
Optionally, the comparing module is further configured to compare the gray scale value of each pixel point of the original image with the adjustment threshold respectively; the setting module is further configured to set the gray scale value of the pixel point of which the gray scale value is greater than or equal to the adjustment threshold value in the original image as the first gray scale value, and set the gray scale value of the pixel point of which the gray scale value is less than the adjustment threshold value in the original image as the second gray scale value.
Optionally, the first grayscale value is 0 and the second grayscale value is 255, or the first grayscale value is 255 and the second grayscale value is 0.
Optionally, the apparatus further includes a calculation module for calculating the adjustment threshold as S' = S + T × (n - 1), where S' is the adjustment threshold, S is the initial threshold, T is the threshold increment, n is the number of adjustments, 255 > S > 0, T > 1, n ≥ 2, and S and T are integers.
Optionally, the analysis module is configured to perform two-dimensional code scanning on the first image; the analysis module is further configured to perform two-dimensional code scanning on the second image.
Optionally, the analysis result is two-dimensional code information.
Optionally, the adjustment threshold is greater than or equal to 20 and/or less than or equal to 150, and the threshold increment is equal to 5.
The embodiment of the invention has the following advantages:
Embodiments of the invention are applied to a video network. A video network terminal acquires an original image to be processed, adjusts the gray value of each pixel point of the original image according to a preset initial threshold to obtain a first image, and parses the first image. If no parsing result is obtained, the terminal adjusts the gray values again according to the calculated adjustment threshold to obtain a second image and parses it; if there is still no result, the terminal continues adjusting the gray values to produce new images and parsing them until a parsing result is obtained.
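The retry loop just described can be sketched as follows. Here `decode` stands in for the terminal's two-dimensional code scanner, and the default bounds (S = 20, T = 5, threshold capped at 150) follow the optional values stated earlier; all names are hypothetical:

```python
def decode_with_retries(pixels, decode, s=20, t=5, s_max=150):
    """Binarize the original image at successively larger thresholds,
    parsing each new image until a parsing result is obtained."""
    n = 1
    threshold = s
    while threshold <= s_max:
        image = [255 if p >= threshold else 0 for p in pixels]
        result = decode(image)
        if result is not None:
            return result, threshold
        n += 1
        threshold = s + t * (n - 1)          # S' = S + T * (n - 1)
    return None, None                        # no result within the bounds
```

The original gray values are re-thresholded each time, so each pass starts from the unmodified original image rather than from the previous binarized image.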
By applying the characteristics of the video network, after the video network terminal acquires the original image, it adjusts the gray value of each pixel point to obtain a new image and then parses the new image to obtain a parsing result. Embodiments of the invention thereby improve the efficiency of two-dimensional code recognition on the original image.
Drawings
FIG. 1 is a schematic networking diagram of a video network of the present invention;
FIG. 2 is a schematic diagram of a hardware architecture of a node server according to the present invention;
fig. 3 is a schematic diagram of a hardware architecture of an access switch of the present invention;
fig. 4 is a schematic diagram of a hardware structure of an ethernet protocol conversion gateway according to the present invention;
FIG. 5 is a flow chart of the steps of an embodiment of an image processing method of the present invention;
fig. 6 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Video networking is an important milestone in network development. It is a real-time network that can achieve real-time transmission of high-definition video, pushing many Internet applications toward high-definition video and high-definition face-to-face communication.
Video networking adopts real-time high-definition video switching technology and can integrate dozens of required services, such as video, voice, pictures, text, communication, and data, on a single network platform: high-definition video conferencing, video surveillance, intelligent surveillance analysis, emergency command, digital broadcast television, time-shifted television, network teaching, live broadcast, VOD on demand, television mail, Personal Video Recorder (PVR), intranet (self-office) channels, intelligent video broadcast control, information distribution, and more, delivering high-definition-quality video through a television or a computer.
To better understand the embodiments of the present invention, the video network is described below.
some of the technologies applied in the video networking are as follows:
network Technology (Network Technology)
Network technology innovation in video networking improves on traditional Ethernet to face the potentially enormous video traffic on the network. Unlike pure network packet switching (Packet Switching) or network circuit switching (Circuit Switching), video networking technology employs packet switching to satisfy the demands of streaming media (a data transmission technique that converts received data into a stable, continuous stream and transmits it continuously, so that the sound or image the user perceives is smooth and viewing can begin before the entire file has been transmitted). Video networking technology has the flexibility, simplicity, and low cost of packet switching while also providing the quality and security guarantees of circuit switching, achieving a seamless combination of whole-network switched virtual circuits and packet data formats.
Switching Technology (Switching Technology)
The video network adopts the two advantages of Ethernet, asynchronism and packet switching, while eliminating Ethernet's defects on the premise of full compatibility. It offers end-to-end seamless connectivity across the whole network, communicates directly with user terminals, and directly carries IP data packets. User data requires no format conversion anywhere on the network. Video networking is a higher-level form of Ethernet: a real-time switching platform that can achieve whole-network, large-scale, high-definition real-time video transmission beyond what the existing Internet can deliver, pushing many network video applications toward high definition and unification.
Server Technology (Server Technology)
Server technology on the video networking and unified video platforms differs from traditional server technology: streaming media transmission is built on a connection-oriented basis, data processing capability is independent of traffic and communication time, and a single network layer can carry both signaling and data transmission. For voice and video services, streaming media processing on the video networking and unified video platforms is much simpler than general data processing, and efficiency is improved more than a hundredfold over a traditional server.
Storage Technology (Storage Technology)
To accommodate very large media content and very high traffic, the ultra-high-speed storage technology of the unified video platform uses an advanced real-time operating system. Program information in a server instruction is mapped to specific hard disk space, and media content no longer passes through the server but is sent directly and instantly to the user terminal, with typical user waiting times under 0.2 seconds. Optimized sector distribution greatly reduces the mechanical seek movement of the hard disk head; resource consumption is only 20% of an IP Internet system of the same grade, yet concurrent throughput is three times that of a traditional hard disk array, improving overall efficiency more than tenfold.
Network Security Technology (Network Security Technology)
The structural design of the video network eliminates, at the structural level, the network security problems that trouble the Internet, through measures such as per-session independent service permission control and complete isolation of equipment and user data. It generally needs no antivirus programs or firewalls, avoids hacker and virus attacks, and provides users with a structurally worry-free, secure network.
Service Innovation Technology (Service Innovation Technology)
The unified video platform integrates services and transmission: whether for a single user, a private-network user, or an entire network, it connects automatically in one step. User terminals, set-top boxes, or PCs connect directly to the unified video platform to obtain a variety of multimedia video services. The unified video platform uses menu-style configuration tables in place of traditional complex application programming, so complex applications can be realized with very little code, enabling unlimited new service innovation.
Networking of the video network is as follows:
the video network is a centralized control network structure, and the network can be a tree network, a star network, a ring network and the like, but on the basis of the centralized control node, the whole network is controlled by the centralized control node in the network.
As shown in fig. 1, the video network is divided into two parts, an access network and a metropolitan network.
The devices of the access network part can be mainly classified into 3 types: node server, access switch, terminal (including various set-top boxes, coding boards, memories, etc.). The node server is connected to an access switch, which may be connected to a plurality of terminals and may be connected to an ethernet network.
The node server is a node which plays a centralized control function in the access network and can control the access switch and the terminal. The node server can be directly connected with the access switch or directly connected with the terminal.
Similarly, devices of the metropolitan network portion may also be classified into 3 types: a metropolitan area server, a node switch and a node server. The metro server is connected to a node switch, which may be connected to a plurality of node servers.
The node server here is the node server of the access network part; that is, the node server belongs to both the access network part and the metropolitan area network part.
The metropolitan area server is a node which plays a centralized control function in the metropolitan area network and can control a node switch and a node server. The metropolitan area server can be directly connected with the node switch or directly connected with the node server.
Therefore, the whole video network is a network structure with layered centralized control, and the network controlled by the node server and the metropolitan area server can be in various structures such as tree, star and ring.
The access network part can form a unified video platform (the part in the dotted circle), and a plurality of unified video platforms can form a video network; each unified video platform may be interconnected via metropolitan area and wide area video networking.
Video networking device classification
1.1 devices in the video network of the embodiment of the present invention can be mainly classified into 3 types: servers, switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.). The video network as a whole can be divided into a metropolitan area network (or national network, global network, etc.) and an access network.
1.2 wherein the devices of the access network part can be mainly classified into 3 types: node servers, access switches (including ethernet gateways), terminals (including various set-top boxes, code boards, memories, etc.).
The specific hardware structure of each access network device is as follows:
a node server:
as shown in fig. 2, the system mainly includes a network interface module 201, a switching engine module 202, a CPU module 203, and a disk array module 204.
The network interface module 201, the CPU module 203, and the disk array module 204 all enter the switching engine module 202; the switching engine module 202 performs an operation of looking up the address table 205 on the incoming packet, thereby obtaining the direction information of the packet; and stores the packet in a queue of the corresponding packet buffer 206 based on the packet's steering information; if the queue of the packet buffer 206 is nearly full, it is discarded; the switching engine module 202 polls all packet buffer queues for forwarding if the following conditions are met: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero. The disk array module 204 mainly implements control over the hard disk, including initialization, read-write, and other operations on the hard disk; the CPU module 203 is mainly responsible for protocol processing with an access switch and a terminal (not shown in the figure), configuring an address table 205 (including a downlink protocol packet address table, an uplink protocol packet address table, and a data packet address table), and configuring the disk array module 204.
The access switch:
as shown in fig. 3, the network interface module (downstream network interface module 301, upstream network interface module 302), the switching engine module 303, and the CPU module 304 are mainly included.
Wherein a packet (uplink data) arriving from the downlink network interface module 301 enters the packet detection module 305; the packet detection module 305 checks whether the destination address (DA), source address (SA), packet type, and packet length of the packet meet the requirements, and if so assigns a corresponding stream identifier (stream-id) and passes the packet to the switching engine module 303, otherwise discards the packet; a packet (downlink data) arriving from the uplink network interface module 302 enters the switching engine module 303; a data packet coming from the CPU module 304 enters the switching engine module 303; the switching engine module 303 looks up the address table 306 for each incoming packet to obtain its direction information; if a packet entering the switching engine module 303 is going from a downlink network interface to an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 in association with its stream-id, and if that queue is nearly full it is discarded; if a packet entering the switching engine module 303 is not going from a downlink network interface to an uplink network interface, it is stored in the queue of the corresponding packet buffer 307 according to its direction information, and if that queue is nearly full it is discarded.
The switching engine module 303 polls all packet buffer queues, which in this embodiment of the present invention is divided into two cases:
if the queue is from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queued packet counter is greater than zero; 3) and obtaining the token generated by the code rate control module.
If the queue is not from the downlink network interface to the uplink network interface, the following conditions are met for forwarding: 1) the port send buffer is not full; 2) the queue packet counter is greater than zero.
The rate control module 308 is configured by the CPU module 304 and, at programmable intervals, generates tokens for all packet buffer queues going from downlink network interfaces to uplink network interfaces, to control the rate of uplink forwarding.
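The forwarding conditions checked during the polling loop can be sketched as a single predicate; whether the queue runs from a downlink to an uplink interface decides if a rate-control token is also required. This is a sketch only, with hypothetical names:

```python
def may_forward(send_buffer_full, queued_packets, to_uplink, tokens):
    """Conditions checked by the switching engine when polling a queue:
    1) the port send buffer is not full; 2) the queue packet counter is
    greater than zero; 3) for downlink-to-uplink queues only, a token
    generated by the rate control module has been obtained."""
    if send_buffer_full or queued_packets == 0:
        return False
    return tokens > 0 if to_uplink else True
```

Queues in other directions skip the token check, matching the two polling cases described above.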
The CPU module 304 is mainly responsible for protocol processing with the node server, configuration of the address table 306, and configuration of the code rate control module 308.
Ethernet protocol conversion gateway
As shown in fig. 4, the apparatus mainly includes a network interface module (a downlink network interface module 401 and an uplink network interface module 402), a switching engine module 403, a CPU module 404, a packet detection module 405, a rate control module 408, an address table 406, a packet buffer 407, a MAC adding module 409, and a MAC deleting module 410.
Wherein a data packet arriving from the downlink network interface module 401 enters the packet detection module 405; the packet detection module 405 checks whether the Ethernet MAC DA, Ethernet MAC SA, Ethernet length or frame type, video network destination address DA, video network source address SA, video network packet type, and packet length of the packet meet the requirements, and if so assigns a corresponding stream identifier (stream-id); the MAC deletion module 410 then strips the MAC DA, MAC SA, and length or frame type (2 bytes), and the packet enters the corresponding receive buffer; otherwise the packet is discarded.
the downlink network interface module 401 detects the sending buffer of the port, and if there is a packet, obtains the ethernet MAC DA of the corresponding terminal according to the video networking destination address DA of the packet, adds the ethernet MAC DA of the terminal, the MAC SA of the ethernet coordination gateway, and the ethernet length or frame type, and sends the packet.
The other modules in the Ethernet protocol conversion gateway function similarly to those of the access switch.
A terminal:
the system mainly comprises a network interface module, a service processing module and a CPU module; for example, the set-top box mainly comprises a network interface module, a video and audio coding and decoding engine module and a CPU module; the coding board mainly comprises a network interface module, a video and audio coding engine module and a CPU module; the memory mainly comprises a network interface module, a CPU module and a disk array module.
1.3 devices of the metropolitan area network part can be largely classified into 3 types: node server, node exchanger, metropolitan area server. The node switch mainly comprises a network interface module, a switching engine module and a CPU module; the metropolitan area server mainly comprises a network interface module, a switching engine module and a CPU module.
2. Video networking packet definition
2.1 Access network packet definition
The data packet of the access network mainly comprises the following parts: destination Address (DA), Source Address (SA), reserved bytes, payload (pdu), CRC.
As shown in the following table, the data packet of the access network mainly includes the following parts:
DA SA Reserved Payload CRC
wherein:
the Destination Address (DA) is composed of 8 bytes (byte), the first byte represents the type of the data packet (e.g. various protocol packets, multicast data packets, unicast data packets, etc.), there are at most 256 possibilities, the second byte to the sixth byte are metropolitan area network addresses, and the seventh byte and the eighth byte are access network addresses.
The Source Address (SA) is also composed of 8 bytes (byte), defined as the same as the Destination Address (DA).
The reserved byte consists of 2 bytes.
The payload length depends on the datagram type: it is 64 bytes if the datagram is one of the various protocol packets, and 1056 bytes if the datagram is a unicast packet, although it is not limited to these two types.
The CRC consists of 4 bytes and is calculated in accordance with the standard ethernet CRC algorithm.
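The access-network packet layout above (8-byte DA, 8-byte SA, 2 reserved bytes, payload, 4-byte CRC) can be sketched in Python. The CRC byte order is an assumption, and `zlib.crc32` is used as a stand-in for the standard Ethernet CRC-32 calculation:

```python
import struct
import zlib

def build_access_packet(da, sa, payload):
    """DA (8 B) + SA (8 B) + reserved (2 B) + payload + CRC (4 B)."""
    assert len(da) == 8 and len(sa) == 8
    assert len(payload) in (64, 1056)        # protocol packet / unicast packet
    body = da + sa + b"\x00\x00" + payload
    return body + struct.pack(">I", zlib.crc32(body))

def parse_access_packet(packet):
    body, crc = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    assert zlib.crc32(body) == crc, "CRC mismatch"
    da, sa, payload = body[:8], body[8:16], body[18:]
    # The first DA byte is the packet type; bytes 2-6 carry the
    # metropolitan address and bytes 7-8 the access-network address.
    return {"type": da[0], "da": da, "sa": sa, "payload": payload}
```

A 64-byte protocol packet therefore occupies 8 + 8 + 2 + 64 + 4 = 86 bytes on the wire.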
2.2 metropolitan area network packet definition
The topology of a metropolitan area network is a graph, and there may be two or even more connections between two devices; that is, there can be more than two connections between a node switch and a node server, between two node switches, and between two node servers. However, the metropolitan network address of each metropolitan network device is unique; therefore, to describe the connection relationships between metropolitan network devices accurately, an embodiment of the present invention introduces a parameter: a label, which uniquely describes a metropolitan area network device.
In this specification, the definition of the label is similar to that of a label in Multi-Protocol Label Switching (MPLS): assuming there are two connections between device A and device B, a packet from device A to device B has 2 labels, and a packet from device B to device A also has 2 labels. Labels are classified into incoming labels and outgoing labels; assuming that the label of a packet entering device A (the incoming label) is 0x0000, the label of the packet leaving device A (the outgoing label) may become 0x0001. The network access process of the metro network is a process under centralized control, that is, both address allocation and label allocation in the metro network are dominated by the metro server, with the node switches and node servers executing passively. This differs from MPLS, where label allocation is the result of mutual negotiation between the switch and the server.
As shown in the following table, the data packet of the metro network mainly includes the following parts:
DA | SA | Reserved | Label | Payload | CRC
Namely Destination Address (DA), Source Address (SA), Reserved bytes, Label, Payload (PDU), and CRC. The format of the label may be defined as follows: the label is 32 bits, with the upper 16 bits reserved and only the lower 16 bits used; it is located between the reserved bytes and the payload of the packet.
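The label handling described above can be sketched as follows. This is a hypothetical illustration: the label's byte offset follows the metro packet layout above, the label-swap helper mirrors the incoming/outgoing label example (0x0000 in, 0x0001 out), and the helper names and byte order are assumptions.

```python
import struct

LABEL_MASK = 0x0000FFFF  # label is 32 bits; upper 16 bits reserved, lower 16 used

def extract_label(packet: bytes) -> int:
    # Metro packet layout: DA(8) | SA(8) | Reserved(2) | Label(4) | Payload | CRC(4)
    (raw,) = struct.unpack_from(">I", packet, 18)
    return raw & LABEL_MASK

def swap_label(packet: bytes, out_label: int) -> bytes:
    # A hypothetical node-switch step: replace the incoming label with the
    # outgoing label assigned under the metro server's centralized control.
    return packet[:18] + struct.pack(">I", out_label & LABEL_MASK) + packet[22:]

pkt = b"\x00" * 18 + struct.pack(">I", 0x0000) + b"payload" + b"\x00" * 4
print(hex(extract_label(pkt)))   # 0x0  (incoming label)
pkt = swap_label(pkt, 0x0001)
print(hex(extract_label(pkt)))   # 0x1  (outgoing label)
```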
Based on the above characteristics of the video network, one of the core concepts of the embodiment of the invention is proposed: following the protocol of the video network, after the video network terminal acquires an original image to be processed, it adjusts the gray value of each pixel point of the original image to obtain a new image, and then parses the new image to obtain an analysis result.
Referring to fig. 5, a flowchart illustrating steps of an embodiment of an image processing method according to the present invention is shown, where the method may be applied to a video network, and the video network may include a video network terminal, and the method may specifically include the following steps:
step 501, the video network terminal obtains an original image to be processed.
In a specific implementation, the video network terminal may be a Set-Top Box (STB), a device that connects a television set to an external signal source and converts a compressed digital signal into television content for display on the television set.
Generally, the set-top box may be connected to a camera and a microphone for collecting multimedia data such as video data and audio data, and may also be connected to a television for playing multimedia data such as video data and audio data.
In the embodiment of the invention, the video network terminal can acquire the original image to be processed through the built-in camera or the external camera, and the original image can be an image containing the two-dimensional code.
Step 502, the video network terminal adjusts the gray value of each pixel point of the original image according to a preset initial threshold value to obtain a first image, and analyzes the first image.
In the embodiment of the invention, when two-dimensional code identification is performed on the original image, the purpose of adjusting the gray value of each pixel point of the original image is to convert the original image into a new image containing only black and white (in general, a two-dimensional code image is black and white), thereby reducing the influence of other colors on the two-dimensional code identification and improving its efficiency.
In a preferred embodiment of the present invention, when the video network terminal adjusts the gray value of each pixel point of the original image according to the preset initial threshold, it compares the gray value of each pixel point with the initial threshold. The terminal sets the gray value of each pixel point whose gray value is greater than or equal to the initial threshold to a preset first gray value, and sets the gray value of each pixel point whose gray value is less than the initial threshold to a preset second gray value. For example, suppose the preset initial threshold is 20, the preset first gray value is 0, and the preset second gray value is 255. The video network terminal compares the gray value of each pixel point in the original image with 20: the pixel points with gray values greater than or equal to 20 form a first pixel point cluster, the pixel points with gray values less than 20 form a second pixel point cluster, the gray value of each pixel point in the first cluster is set to 0, and the gray value of each pixel point in the second cluster is set to 255.
The video network terminal takes the new image obtained after adjusting the gray value of each pixel point of the original image according to the initial threshold as the first image. The first image contains only pixels with gray values of 0 and 255.
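The thresholding step described above can be sketched as follows. This is a minimal illustration using a plain list-of-lists grayscale image; the helper name and sample values are hypothetical, with a threshold of 20 and first/second gray values of 0/255 as in the example.

```python
def binarize(image, threshold, first_gray=0, second_gray=255):
    """Set pixels with gray value >= threshold to first_gray and the
    rest to second_gray, as in the thresholding step described above."""
    return [
        [first_gray if g >= threshold else second_gray for g in row]
        for row in image
    ]

# hypothetical 2x3 grayscale image
original = [[10, 20, 30],
            [5, 200, 19]]
first_image = binarize(original, threshold=20)
print(first_image)  # [[255, 0, 0], [255, 0, 255]]
```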
After adjusting the gray values to obtain the first image, the video network terminal further parses the first image in order to identify the two-dimensional code it contains.
In a preferred embodiment of the present invention, when parsing the first image, the video network terminal may scan the first image for a two-dimensional code. If the first image contains a two-dimensional code that can be scanned, the video network terminal parses it to obtain the two-dimensional code information; if the first image does not contain a two-dimensional code, or the contained two-dimensional code cannot be scanned, no two-dimensional code information is parsed.
Step 503, if the video network terminal analyzes the first image and does not obtain an analysis result, the video network terminal adjusts the gray value of each pixel point of the original image again according to the calculated adjustment threshold to obtain a second image, and analyzes the second image.
In the embodiment of the invention, the adjustment threshold may be calculated by the video network terminal from the initial threshold, a preset threshold amplification, and the number of times the gray values of the pixel points of the original image have been adjusted.
In a preferred embodiment of the invention, the adjustment threshold is calculated by:
S′=S+T×(n-1);
where S′ is the adjustment threshold, S is the initial threshold, T is the threshold amplification, and n is the number of adjustments, with 255 > S > 0, T > 1, n ≥ 2, and S and T being integers.
For example, if the initial threshold is 20, the threshold amplification is 5, and the number of adjustments is 2, the adjustment threshold is 20 + 5 × (2 − 1) = 25.
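The formula can be checked with a short sketch (the function name is hypothetical; the constraints on S, T, and n follow the text):

```python
def adjustment_threshold(s, t, n):
    """S' = S + T * (n - 1), with 255 > S > 0, T > 1, n >= 2 per the text."""
    assert 0 < s < 255 and t > 1 and n >= 2
    return s + t * (n - 1)

print(adjustment_threshold(20, 5, 2))  # 25 (the worked example above)
print(adjustment_threshold(20, 5, 3))  # 30 (third adjustment round)
```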
In a preferred embodiment of the present invention, when the video network terminal readjusts the gray value of each pixel point of the original image according to the calculated adjustment threshold, it compares the gray value of each pixel point with the adjustment threshold. The terminal sets the gray value of each pixel point whose gray value is greater than or equal to the adjustment threshold to the first gray value, and sets the gray value of each pixel point whose gray value is less than the adjustment threshold to the second gray value.
It should be noted that the first gray value and the second gray value may be interchanged: when the first gray value is 0, the second gray value is 255; when the first gray value is 255, the second gray value is 0.
The video network terminal takes the new image obtained after adjusting the gray value of each pixel point of the original image according to the calculated adjustment threshold as the second image. The second image contains only pixels with gray values of 0 and 255.
After adjusting the gray values to obtain the second image, the video network terminal further parses the second image in order to identify the two-dimensional code it contains.
In a preferred embodiment of the present invention, when parsing the second image, the video network terminal may scan the second image for a two-dimensional code. If the second image contains a two-dimensional code that can be scanned, the video network terminal parses it to obtain the two-dimensional code information; if the second image does not contain a two-dimensional code, or the contained two-dimensional code cannot be scanned, no two-dimensional code information is parsed.
It should be noted that, if the video network terminal analyzes the first image to obtain an analysis result, the process of the embodiment of the present invention is ended, and the subsequent steps of adjusting and analyzing are not required to be executed. If the video network terminal analyzes the second image to obtain an analysis result, the process of the embodiment of the invention is ended. If the video network terminal analyzes the second image and does not obtain an analysis result, the video network terminal calculates a new adjustment threshold, adjusts the gray value of each pixel point in the original image according to the new adjustment threshold to obtain a new image, analyzes the new image, and executes the process until the analysis result is obtained, and the process of the embodiment of the invention is finished.
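The adjust-and-parse loop of steps 502 and 503 can be sketched as follows. This is an illustrative sketch, not the patented implementation: the `decode` callback stands in for the terminal's two-dimensional code parser, and the upper bound of 150 on the adjustment threshold is taken from a preferred embodiment stated elsewhere in the specification.

```python
def decode_with_adaptive_threshold(image, decode, s=20, t=5, max_threshold=150):
    """Repeatedly binarize `image` at S' = S + T*(n-1) and try to decode,
    until decoding succeeds or the threshold bound is exceeded.
    `decode` is a stand-in for the terminal's two-dimensional code parser."""
    n = 1
    while True:
        threshold = s + t * (n - 1)
        if threshold > max_threshold:
            return None  # no parse result within the threshold range
        # threshold the original image (first gray 0, second gray 255)
        new_image = [
            [0 if g >= threshold else 255 for g in row] for row in image
        ]
        result = decode(new_image)
        if result is not None:
            return result  # parsing succeeded; stop adjusting
        n += 1

# hypothetical decoder that only succeeds once the threshold reaches 30,
# i.e. once the pixel with gray value 29 flips to white (255)
def fake_decode(img):
    return "qr-data" if img[0][0] == 255 else None

print(decode_with_adaptive_threshold([[29]], fake_decode))  # qr-data
```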
The embodiment of the invention is applied to a video network. A video network terminal acquires an original image to be processed, adjusts the gray value of each pixel point of the original image according to a preset initial threshold to obtain a first image, and parses the first image. If no analysis result is obtained, the terminal readjusts the gray value of each pixel point of the original image according to the calculated adjustment threshold to obtain a second image and parses it; if still no analysis result is obtained, the terminal continues adjusting the gray values to obtain new images and parsing them until an analysis result is obtained.
According to the embodiment of the invention, by applying the characteristics of the video network, after the video network terminal acquires the original image, the gray value of each pixel point is adjusted to obtain a new image, and then the new image is analyzed to obtain an analysis result. By the embodiment of the invention, the identification efficiency of two-dimensional code identification on the original image is improved.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 6, a block diagram of an embodiment of an image processing apparatus according to the present invention is shown, the apparatus may be applied to a video network terminal in a video network, and the apparatus may specifically include the following modules:
the obtaining module 601 is configured to obtain an original image to be processed.
The adjusting module 602 is configured to adjust a gray value of each pixel of the original image according to a preset initial threshold, so as to obtain a first image.
An analyzing module 603 configured to analyze the first image.
The adjusting module 602 is further configured to, if the analyzing module 603 analyzes the first image and does not obtain an analysis result, adjust the gray value of each pixel point of the original image again according to the calculated adjustment threshold, so as to obtain a second image.
The analyzing module 603 is further configured to analyze the second image until a new image obtained by adjusting the gray value of each pixel of the original image by the adjusting module 602 is analyzed, so as to obtain an analysis result.
The adjustment threshold is calculated by the video network terminal from the initial threshold, the preset threshold amplification, and the number of adjustments of the gray values of the pixel points of the original image.
In a preferred embodiment of the present invention, the adjusting module 602 includes: a comparison module 6021, configured to compare the gray values of the pixels of the original image with the initial threshold values respectively; the setting module 6022 is configured to set a gray value of a pixel point in the original image, where the gray value is greater than or equal to the initial threshold, as a preset first gray value, and set a gray value of a pixel point in the original image, where the gray value is smaller than the initial threshold, as a preset second gray value.
In a preferred embodiment of the present invention, the comparing module 6021 is further configured to compare the gray level values of the pixels of the original image with the adjustment threshold respectively; the setting module 6022 is further configured to set the gray value of the pixel point of which the gray value is greater than or equal to the adjustment threshold in the original image as the first gray value, and set the gray value of the pixel point of which the gray value is smaller than the adjustment threshold in the original image as the second gray value.
In a preferred embodiment of the present invention, the first gray scale value is 0 and the second gray scale value is 255, or the first gray scale value is 255 and the second gray scale value is 0.
In a preferred embodiment of the present invention, the apparatus further comprises: a calculating module 604, configured to calculate the adjustment threshold by S′ = S + T × (n − 1), where S′ is the adjustment threshold, S is the initial threshold, T is the threshold amplification, and n is the number of adjustments, with 255 > S > 0, T > 1, n ≥ 2, and S and T being integers.
In a preferred embodiment of the present invention, the parsing module 603 is configured to perform two-dimensional code scanning on the first image; the parsing module 603 is further configured to perform two-dimensional code scanning on the second image.
In a preferred embodiment of the present invention, the analysis result is two-dimensional code information.
In a preferred embodiment of the invention, the adjustment threshold is greater than or equal to 20, and/or the adjustment threshold is less than or equal to 150; the threshold amplification is equal to 5.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "include", "including", or any other variations thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises the element.
The foregoing detailed description of an image processing method and an image processing apparatus according to the present invention has been presented, and the principles and embodiments of the present invention are explained herein by using specific examples, which are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (14)

1. An image processing method is applied to a video network, wherein the video network comprises a video network terminal, and the method comprises the following steps:
the video network terminal acquires an original image to be processed;
the video network terminal adjusts the gray value of each pixel point of the original image according to a preset initial threshold value to obtain a first image, and analyzes the first image;
if the video network terminal analyzes the first image and does not obtain an analysis result, the video network terminal adjusts the gray value of each pixel point of the original image again according to the calculated adjustment threshold value to obtain a second image, and analyzes the second image until the analysis result is obtained;
the adjustment threshold is calculated by the video network terminal from the initial threshold, the preset threshold amplification, and the number of adjustments of the gray values of the pixel points of the original image;
the adjustment threshold is calculated in the following way:
S′=S+T×(n-1);
wherein S′ is the adjustment threshold, S is the initial threshold, T is the threshold amplification, and n is the number of adjustments, with 255 > S > 0, T > 1, n ≥ 2, and S and T being integers.
2. The image processing method according to claim 1, wherein the adjusting, by the terminal of the video network, the gray value of each pixel point of the original image according to a preset initial threshold value comprises:
the video network terminal compares the gray value of each pixel point of the original image with the initial threshold value respectively;
and the video network terminal sets the gray value of the pixel point of which the gray value is greater than or equal to the initial threshold value in the original image as a preset first gray value, and sets the gray value of the pixel point of which the gray value is less than the initial threshold value in the original image as a preset second gray value.
3. The image processing method according to claim 2, wherein the adjusting the gray value of each pixel point of the original image again by the terminal of the video network according to the calculated adjusting threshold comprises:
the video network terminal compares the gray value of each pixel point of the original image with the adjustment threshold value respectively;
and the video network terminal sets the gray value of the pixel point of which the gray value is greater than or equal to the adjustment threshold value in the original image as the first gray value, and sets the gray value of the pixel point of which the gray value is less than the adjustment threshold value in the original image as the second gray value.
4. The image processing method according to claim 3, wherein the first grayscale value is 0 and the second grayscale value is 255, or wherein the first grayscale value is 255 and the second grayscale value is 0.
5. The image processing method according to claim 1,
the parsing of the first image by the video network terminal comprises:
the video network terminal performing two-dimensional code scanning on the first image;
the parsing of the second image by the video network terminal comprises:
the video network terminal performing two-dimensional code scanning on the second image.
6. The image processing method according to claim 1, wherein the analysis result is two-dimensional code information.
7. The image processing method according to any one of claims 1 to 6,
the adjustment threshold is greater than or equal to 20, and/or the adjustment threshold is less than or equal to 150;
the threshold amplification is equal to 5.
8. An image processing apparatus, wherein the apparatus is applied to a video network terminal in a video network, the apparatus comprising:
the acquisition module is used for acquiring an original image to be processed;
the adjusting module is used for adjusting the gray value of each pixel point of the original image according to a preset initial threshold value to obtain a first image;
the analysis module is used for analyzing the first image;
the adjusting module is further configured to, if the analyzing module analyzes the first image and does not obtain an analysis result, adjust the gray value of each pixel point of the original image again according to the calculated adjusting threshold to obtain a second image;
the analysis module is further used for analyzing the second image until a new image obtained by adjusting the gray value of each pixel point of the original image by the adjustment module is analyzed to obtain an analysis result;
the adjustment threshold is calculated by the video network terminal from the initial threshold, the preset threshold amplification, and the number of adjustments of the gray values of the pixel points of the original image;
the device further comprises: a calculating module, configured to calculate the adjustment threshold by S′ = S + T × (n − 1);
wherein S′ is the adjustment threshold, S is the initial threshold, T is the threshold amplification, and n is the number of adjustments, with 255 > S > 0, T > 1, n ≥ 2, and S and T being integers.
9. The image processing apparatus according to claim 8, wherein the adjusting module includes:
the comparison module is used for comparing the gray value of each pixel point of the original image with the initial threshold value respectively;
and the setting module is used for setting the gray value of the pixel point of which the gray value is greater than or equal to the initial threshold value in the original image as a preset first gray value and setting the gray value of the pixel point of which the gray value is less than the initial threshold value in the original image as a preset second gray value.
10. The image processing apparatus according to claim 9,
the comparison module is further configured to compare the gray value of each pixel point of the original image with the adjustment threshold respectively;
the setting module is further configured to set the gray scale value of the pixel point of which the gray scale value is greater than or equal to the adjustment threshold value in the original image as the first gray scale value, and set the gray scale value of the pixel point of which the gray scale value is less than the adjustment threshold value in the original image as the second gray scale value.
11. The apparatus according to claim 10, wherein the first grayscale value is 0 and the second grayscale value is 255, or wherein the first grayscale value is 255 and the second grayscale value is 0.
12. The image processing apparatus according to claim 8,
the analysis module is used for scanning the two-dimensional code of the first image;
the analysis module is further configured to perform two-dimensional code scanning on the second image.
13. The image processing apparatus according to claim 8, wherein the analysis result is two-dimensional code information.
14. The image processing apparatus according to any one of claims 8 to 13,
the adjustment threshold is greater than or equal to 20, and/or the adjustment threshold is less than or equal to 150;
the threshold amplification is equal to 5.
CN201810662738.2A 2018-06-25 2018-06-25 Image processing method and device Active CN110633605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810662738.2A CN110633605B (en) 2018-06-25 2018-06-25 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810662738.2A CN110633605B (en) 2018-06-25 2018-06-25 Image processing method and device

Publications (2)

Publication Number Publication Date
CN110633605A CN110633605A (en) 2019-12-31
CN110633605B true CN110633605B (en) 2022-05-06

Family

ID=68968114

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810662738.2A Active CN110633605B (en) 2018-06-25 2018-06-25 Image processing method and device

Country Status (1)

Country Link
CN (1) CN110633605B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512598A (en) * 2015-12-29 2016-04-20 暨南大学 Adaptive matching identification method of QR code image sampling
CN106803258A (en) * 2017-01-13 2017-06-06 深圳怡化电脑股份有限公司 A kind of image processing method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
CN105654017B (en) * 2015-12-25 2018-06-26 广州视源电子科技股份有限公司 Two-dimentional decoding transmission method and system
CN105827890A (en) * 2016-04-28 2016-08-03 乐视控股(北京)有限公司 Method and apparatus for scanning 2D codes
CN106626845B (en) * 2016-09-20 2019-01-11 深圳市裕同包装科技股份有限公司 The printing method of gray scale two dimensional code
CN106778995B (en) * 2016-11-25 2020-02-28 北京矩石科技有限公司 Artistic two-dimensional code generation method and device fused with image
CN107918748A (en) * 2017-10-27 2018-04-17 南京理工大学 A kind of multispectral two-dimension code recognition device and method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN105512598A (en) * 2015-12-29 2016-04-20 暨南大学 Adaptive matching identification method of QR code image sampling
CN106803258A (en) * 2017-01-13 2017-06-06 深圳怡化电脑股份有限公司 A kind of image processing method and device

Also Published As

Publication number Publication date
CN110633605A (en) 2019-12-31

Similar Documents

Publication Publication Date Title
CN108632525B (en) Method and system for processing service
CN108737768B (en) Monitoring method and monitoring device based on monitoring system
CN109803111B (en) Method and device for watching video conference after meeting
CN108881815B (en) Video data transmission method and device
CN109495713B (en) Video conference control method and device based on video networking
CN109547163B (en) Method and device for controlling data transmission rate
CN108881948B (en) Method and system for video inspection network polling monitoring video
CN111147859A (en) Video processing method and device
CN108574816B (en) Video networking terminal and communication method and device based on video networking terminal
CN110149305B (en) Video network-based multi-party audio and video playing method and transfer server
CN110113564B (en) Data acquisition method and video networking system
CN110049268B (en) Video telephone connection method and device
CN110769179B (en) Audio and video data stream processing method and system
CN109302384B (en) Data processing method and system
CN109743284B (en) Video processing method and system based on video network
CN108965930B (en) Video data processing method and device
CN108965783B (en) Video data processing method and video network recording and playing terminal
CN110769297A (en) Audio and video data processing method and system
CN110086773B (en) Audio and video data processing method and system
CN109842630B (en) Video processing method and device
CN110049069B (en) Data acquisition method and device
CN110401808B (en) Conference control method and device
CN110633592B (en) Image processing method and device
CN110113565B (en) Data processing method and intelligent analysis equipment based on video network
CN110233872B (en) Data transmission method based on video network and video network terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant