CN113610205A - Two-dimensional code generation method and device based on machine vision and storage medium - Google Patents

Two-dimensional code generation method and device based on machine vision and storage medium

Info

Publication number
CN113610205A
CN113610205A (application CN202110800633.0A)
Authority
CN
China
Prior art keywords
detection
image
dimensional code
straight line
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110800633.0A
Other languages
Chinese (zh)
Other versions
CN113610205B
Inventor
黄捷汶
黄耿斌
常晶舒
Current Assignee
Shenzhen Yuxi Technology Co ltd
Original Assignee
Shenzhen Yuxi Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yuxi Technology Co., Ltd.
Priority to CN202110800633.0A
Publication of CN113610205A
Application granted
Publication of CN113610205B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 19/00: Record carriers for use with machines and with at least a part designed to carry digital markings
    • G06K 19/06: Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
    • G06K 19/06009: Record carriers with a digital marking characterised by an optically detectable marking
    • G06K 19/06037: Record carriers with an optically detectable marking using multi-dimensional coding
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00: ICT specially adapted for the handling or processing of medical images
    • G16H 30/40: ICT specially adapted for the handling or processing of medical images, e.g. editing

Abstract

The invention discloses a two-dimensional code generation method, device, and storage medium based on machine vision. The method comprises: obtaining identity information of a user and a detection video, wherein the detection video is used to reflect a detection result of a detection device and the detection device is used only by the user; determining detection information corresponding to the user according to the detection video; and generating a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the problem in the prior art that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing kit.

Description

Two-dimensional code generation method and device based on machine vision and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a two-dimensional code generation method and device based on machine vision and a storage medium.
Background
In the past, people's health status could only be determined through testing at a hospital. With the development of science and technology, a variety of convenient disease self-testing kits have become available, allowing users to perform disease testing at home by themselves and then decide, according to the kit's result, whether they need to go to a hospital for further testing, which saves users a great deal of time. The health code recently released in China can preliminarily screen out healthy people, which facilitates passage and reduces epidemic-prevention pressure. However, at present the state of the health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing kit, so unless the user goes to a hospital for testing, the health code cannot be updated.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a two-dimensional code generation method, device and storage medium based on machine vision, aiming at solving the problem that the state of a health code in the prior art can only be updated synchronously with the detection result of a hospital and cannot be updated synchronously with the detection result of a disease self-testing box.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a two-dimensional code generation method based on machine vision, where the method includes:
acquiring identity information and a detection video of a user, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
determining detection information corresponding to the user according to the detection video;
and generating a two-dimensional code corresponding to the user according to the identity information and the detection information.
In one embodiment, the determining, according to the detection video, detection information corresponding to the user includes:
obtaining a local image according to the detection video;
rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
intercepting a target image from the corrected image;
and carrying out image recognition on the target image and determining the detection information.
In one embodiment, the obtaining a local image according to the detection video includes:
generating a video frame image according to the detection video;
inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
and clipping the video frame image according to the position information to obtain a local image.
In one embodiment, the rotating the partial image to obtain a rotated image and vertically cropping the rotated image to obtain a corrected image includes:
determining a first straight line and a second straight line according to the local image, wherein the first straight line is used for reflecting the left boundary of the detection device, and the second straight line is used for reflecting the right boundary of the detection device;
determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
acquiring a first perpendicular line corresponding to the first straight line and a second perpendicular line corresponding to the second straight line in the rotation image;
and vertically clipping the rotated image according to the first vertical line and the second vertical line to obtain the corrected image.
In one embodiment, the determining the first line and the second line according to the local image includes:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the edge points, and determining a first straight line and a second straight line according to the target edge points.
In one embodiment, the determining a plurality of target edge points from the plurality of edge points, and determining a first line and a second line according to the plurality of target edge points includes:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
and obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points.
In one embodiment, the intercepting a target image from the corrected image comprises:
acquiring parameter information of the detection device, and determining a zoom image corresponding to the correction image according to the parameter information;
acquiring a standard detection device image, and performing template matching on the standard detection device image and the zoomed image to obtain a target area in the zoomed image;
and intercepting the target area from the zoomed image to obtain the target image.
In an embodiment, the generating a two-dimensional code corresponding to the user according to the identity information and the detection information includes:
sending the identity information and the detection information to a background block chain for storage;
acquiring a hash value generated by the background block chain based on the identity information and the detection information;
inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
and generating the two-dimensional code according to the detection information.
In a second aspect, an embodiment of the present invention further provides a two-dimensional code generating apparatus based on machine vision, where the apparatus includes:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring identity information of a user and a detection video, the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
the determining module is used for determining the detection information corresponding to the user according to the detection video;
and the generating module is used for generating the two-dimensional code corresponding to the user according to the identity information and the detection information.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a plurality of instructions are stored, where the instructions are adapted to be loaded and executed by a processor to implement any of the steps of the machine-vision-based two-dimensional code generation method described above.
The invention has the beneficial effects that: the embodiment of the invention obtains the identity information of a user and a detection video, wherein the detection video is used to reflect the detection result of a detection device and the detection device is used only by the user; determines detection information corresponding to the user according to the detection video; and generates a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the problem in the prior art that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing kit.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of a two-dimensional code generation method based on machine vision according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a 2-class training sample provided in an embodiment of the present invention.
Fig. 3 is a schematic diagram of detecting a video according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of acquiring a target image according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the operation of the neural network model according to the embodiment of the present invention.
Fig. 6 is a schematic diagram of the operation of the blockchain according to the embodiment of the present invention.
Fig. 7 is a connection diagram of internal modules of a two-dimensional code generation device based on machine vision according to an embodiment of the present invention.
Fig. 8 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer and clearer, the present invention is further described in detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back) are involved in the embodiments of the present invention, they are only used to explain the relative positional relationship, movement, and the like of the components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
In the past, people's health status could only be determined through testing at a hospital. With the development of science and technology, a variety of convenient disease self-testing kits have become available, allowing users to perform disease testing at home by themselves and then decide, according to the kit's result, whether they need to go to a hospital for further testing, which saves users a great deal of time. The health code recently released in China can preliminarily screen out healthy people, which facilitates passage and reduces epidemic-prevention pressure. However, at present the state of the health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing kit, so unless the user goes to a hospital for testing, the health code cannot be updated.
In view of the above drawbacks of the prior art, the present invention provides a two-dimensional code generation method based on machine vision, which includes: obtaining identity information of a user and a detection video, where the detection video is used to reflect a detection result of a detection device and the detection device is used only by the user; determining detection information corresponding to the user according to the detection video; and generating a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the problem in the prior art that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing kit.
As shown in fig. 1, the method comprises the steps of:
step S100, identity information of a user and a detection video are obtained, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user.
Specifically, since the health code is a two-dimensional code subject to real-name authentication, in order to enable the user to update his or her health code based on the detection result of a self-testing kit, this embodiment acquires the identity information of the user together with a detection video. The detection video captures the detection result generated by the detection device after the user performs a test with it, so the detection video can reflect the user's latest detection result.
In one implementation, this embodiment may provide an application program for uploading the user's identity information and the detection video. In practical use, the user uploads identity information by scanning a two-dimensional code on the disease self-testing kit, opens the shooting function of the application program, and records the entire self-testing process on camera to ensure that no third party helps falsify the self-test result; the detection video is uploaded automatically after shooting is finished.
As shown in fig. 1, the method further comprises the steps of:
and S200, determining detection information corresponding to the user according to the detection video.
Specifically, since the detection result generated after the detection device is used by the user is captured in the detection video, the detection information corresponding to the user can be obtained by identifying and analyzing the detection video, and the detection information can be used as reference information for determining whether the user is ill or not.
In one implementation, the step S200 specifically includes the following steps:
step S201, obtaining a local image according to the detection video;
step S202, rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
step S203, intercepting a target image from the corrected image;
and step S204, carrying out image recognition on the target image and determining the detection information.
Since the detection video includes images of the detection device, this embodiment first cuts out a local image related to the detection device from the detection video. Because the local image contains not only the detection device but also much redundant content, such as parts of the surrounding scene, which contributes nothing to identifying the user's detection information, this embodiment needs to remove this redundant background from the local image. Specifically, the local image is first rotated so that the detection device in it is aligned with the vertical direction, yielding a rotated image. The rotated image is then cropped vertically to remove the redundant background on the left and right sides of the detection device (as shown in fig. 4), yielding a corrected image. Since usually only a partial area of the detection device displays the detection result, a target image is cut out from the corrected image (as shown in fig. 4), and image recognition is then performed on the target image to obtain the user's detection information.
In an implementation manner, the step S201 specifically includes the following steps:
step S2011, generating a video frame image according to the detection video;
step S2012, inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
and S2013, clipping the video frame image according to the position information to obtain a local image.
Specifically, the present embodiment trains an image detection model in advance, and inputs a video frame image generated based on a detection video into the image detection model, so as to obtain the position information of the detection device. Based on the position information, the video frame image can be cropped to eliminate redundant information in the video frame image, so as to obtain a local image, wherein the local image includes the detection device.
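The cropping of step S2013 reduces to plain array slicing once the image detection model has produced position information. In the sketch below, the (x, y, w, h) box format and the boundary clamping are illustrative assumptions, not details from the patent:

```python
import numpy as np

def crop_local_image(frame, box):
    """Cut the region predicted by the image detection model out of a
    video frame. The (x, y, w, h) box format is a hypothetical output
    format for the detection model."""
    x, y, w, h = box
    frame_h, frame_w = frame.shape[:2]
    # clamp so an out-of-range prediction cannot produce a bad slice
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(frame_w, x + w), min(frame_h, y + h)
    return frame[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # one video frame
local = crop_local_image(frame, (500, 100, 300, 900))
print(local.shape)  # (900, 300, 3)
```

Clamping keeps the slice valid even when the model's box extends slightly past the frame border, which is common for objects near the edge of the shot.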
In order to ensure that no third party helps falsify the self-test result, this embodiment may also train a face detection model in advance and require that the user's own face appear in the detection video (as shown in fig. 3). After the detection video is obtained, a video frame image generated from it is input into the face detection model to obtain the face features in the video frame image; tester information is determined from the face features and matched against the user's identity information, and a successful match indicates that the user is the real tester.
In an implementation manner, the step S202 specifically includes the following steps:
step S2021, determining a first straight line and a second straight line according to the local image, where the first straight line is used to reflect a left boundary of the detection apparatus, and the second straight line is used to reflect a right boundary of the detection apparatus;
step S2022, determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
step S2023, acquiring a first perpendicular line corresponding to the first straight line and a second perpendicular line corresponding to the second straight line in the rotated image;
step S2024, performing vertical cropping on the rotated image according to the first vertical line and the second vertical line to obtain the corrected image.
Specifically, this embodiment first determines, in the partial image, the two straight lines reflecting the left and right boundaries of the detection device, i.e., the first straight line and the second straight line. The included angle of the first and second straight lines relative to the y-axis is then calculated; this included angle is the rotation angle, and the partial image is rotated by this angle about its middle point as the rotation center to obtain the rotated image. Since the first and second straight lines rotate along with the partial image, in the rotated image the first straight line becomes the first perpendicular line and the second straight line becomes the second perpendicular line. Because the first and second perpendicular lines reflect the left and right boundaries of the detection device respectively, the redundant background to the left of the detection device in the rotated image can be deleted according to the first perpendicular line and the redundant background to the right can be deleted according to the second perpendicular line, so as to obtain the corrected image.
For example, assume that the equation of the first straight line is x = 0.001346y + 112 and the equation of the second straight line is x = 0.001265y + 401. Since the left and right boundaries are a pair of parallel sides, k1 ≈ k2. The first straight line and the second straight line form an included angle θ with the y-axis, calculated as: θ = arctan((k1 + k2)/2) = arctan(0.0013055) ≈ 0.0747996°. The partial image is rotated counterclockwise by θ = 0.0747996° about the middle point of the partial image as the rotation center, so as to obtain the rotated image, in which x = 111 and x = 400 are the first and second vertical lines. If the height of the rotated image is 1715, the region (111, 0)-(400, 1715) is cut out from the rotated image; this region contains the left and right boundaries of the detection device.
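The angle arithmetic in this example can be checked directly; a minimal sketch using the slopes from the example:

```python
import math

# slopes of the first and second straight lines from the worked example
k1, k2 = 0.001346, 0.001265
theta = math.degrees(math.atan((k1 + k2) / 2))
print(round(theta, 7))  # 0.0747996
```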
In an implementation manner, the determining a first straight line and a second straight line according to the local image specifically includes the following steps:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the edge points, and determining a first straight line and a second straight line according to the target edge points.
Specifically, the edge detection operator used in this embodiment is the Canny operator, an optimized operator with multiple stages: filtering, enhancement, and detection. In this embodiment, the local image is smoothed by the Canny operator to remove noise, and a plurality of edge points are obtained; target edge points are then screened from these edge points and substituted into preset first and second linear equations, so as to obtain the first straight line and the second straight line.
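An image library's Canny implementation (e.g. OpenCV's) would normally supply this step. As a dependency-free illustration of where the edge points come from, the sketch below keeps only a gradient-magnitude-plus-threshold core and omits the smoothing, non-maximum suppression, and hysteresis stages of the full Canny operator:

```python
import numpy as np

def edge_points(img, thresh=100):
    """Simplified edge detector: central-difference gradient magnitude
    plus a threshold. A real pipeline would use the full Canny stages
    (Gaussian filtering, non-maximum suppression, hysteresis)."""
    f = img.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[1:-1, 1:-1] = f[1:-1, 2:] - f[1:-1, :-2]   # horizontal gradient
    gy[1:-1, 1:-1] = f[2:, 1:-1] - f[:-2, 1:-1]   # vertical gradient
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > thresh)              # row, col indices
    return list(zip(xs, ys))                       # as (x, y) points

# a dark image with one bright vertical stripe: edges appear along the
# stripe's borders, like a kit boundary inside a local image
img = np.zeros((10, 10), dtype=np.uint8)
img[:, 4:6] = 255
pts = edge_points(img)
print(sorted({int(x) for x, y in pts}))  # [3, 4, 5, 6]
```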
In one implementation, the determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points includes:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
and obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points.
Specifically, a plurality of edge points are obtained after edge detection is performed on the local image by the Canny operator, and a Hough transform is then used to convert the edge points into the kb parameter space, obtaining a plurality of transform points in the kb parameter space that correspond one-to-one with the edge points. Non-maximum suppression is then performed according to the statistic value of each transform point in the kb parameter space to obtain a plurality of target edge points.
The principle of the Hough transform is as follows: a line can be represented in the form y = kx + b, where (k, b) are the parameters of the line. Assuming the minimum angular unit is 1°, 180 lines can pass through a single point. Consider a straight line y = x + 3 of length 100 at an angle of 45°. For each of the 100 points on this line, votes are counted from 0° to 179°, giving 100 × 180 statistics, which are stored in a two-dimensional matrix. Finally, 100 votes fall on the matrix point (k = tan 45° = 1, b = 3), while at other points the counts are much smaller than 100 or even 0. Such a matrix for recording statistics is called the kb parameter space. The value v at each point (k, b) of the matrix indicates that a straight line y = kx + b of length v exists in the original image. If all straight lines longer than 50 need to be found in the image, all points with v > 50 are searched in the kb parameter space, and the coordinates of each such point are the parameters of a straight line.
The detailed steps of the hough transform are as follows:
(1) establishing a two-dimensional matrix as a kb parameter space, and initializing all elements to be 0;
(2) searching black points (edge points) in the edge detection image to obtain coordinates (x, y);
(3) let θ traverse from 0° to 179° and substitute (x, y) into y = tan(θ)·x + b, giving b = int(y − tan(θ)·x), where int denotes rounding; 180 sets (θ, b) are thus obtained;
(4) finding the corresponding coordinates of the 180 sets (θ, b) in the kb parameter space, the value at each coordinate being + 1;
(5) repeat until all black points have been searched; after the preliminary statistics are finished, perform non-maximum suppression: traverse each point of the kb parameter space and compare its value with those of its 8-neighborhood; if the value of the current point is not the maximum in its 8-neighborhood, set it to 0; if it is the maximum, keep it;
(6) at this time, the target edge point is further searched.
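The steps above can be sketched as a small accumulator. The sketch below implements steps (1)-(4) and replaces the 8-neighborhood non-maximum suppression of step (5) with a plain vote threshold; the b-axis offset and the threshold value are implementation assumptions:

```python
import math
import numpy as np

def hough_lines(edge_pts, width, height, min_votes):
    """Vote in the (theta, b) parameter space following steps (1)-(4);
    a simple vote threshold stands in for the full non-maximum
    suppression of step (5)."""
    b_range = 4 * max(width, height)      # b can be negative: offset axis
    acc = np.zeros((180, 2 * b_range), dtype=np.int32)   # step (1)
    for x, y in edge_pts:                 # step (2): each edge point
        for t in range(180):              # step (3): every angle
            if t == 90:
                continue                  # tan(90 deg) is undefined
            b = int(y - math.tan(math.radians(t)) * x)
            if -b_range <= b < b_range:
                acc[t, b + b_range] += 1  # step (4): one vote
    lines = []
    for t in range(180):
        for col in range(acc.shape[1]):
            if acc[t, col] >= min_votes:
                lines.append((t, col - b_range, int(acc[t, col])))
    return lines

# 100 points on y = x + 3 should vote for theta = 45 deg, b = 3,
# matching the worked example in the text
pts = [(x, x + 3) for x in range(100)]
found = hough_lines(pts, 200, 200, min_votes=100)
print(found)  # [(45, 3, 100)]
```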
In one implementation, the target edge points are determined by a pair of the transform points, where the pair of transform points satisfies the following conditions:
1) the k value corresponding to the pair of transformation points is between-pi/4 and pi/4;
2) the absolute value of the difference between the b values corresponding to the pair of transform points is greater than 1/3 of the width of the partial image;
3) the absolute value of the difference between the b values corresponding to the pair of transformed points is smaller than the absolute value of the difference between the b values corresponding to all other transformed points in the parameter space.
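A minimal predicate for these three conditions might look as follows; the (k, b) tuple representation and the reading of condition 3) as "smallest b-gap among all candidate pairs" are assumptions for illustration:

```python
import math

def is_boundary_pair(p1, p2, other_pairs, width):
    """Check conditions 1)-3) for a candidate pair of transform points
    (k, b) standing for the left and right boundary lines. Reading
    condition 3) as the smallest b-gap among candidate pairs is an
    interpretation, not the patent's literal wording."""
    (k1, b1), (k2, b2) = p1, p2
    # 1) both k values lie between -pi/4 and pi/4
    k_in_range = all(-math.pi / 4 < k < math.pi / 4 for k in (k1, k2))
    # 2) the b-gap exceeds one third of the partial-image width
    gap = abs(b1 - b2)
    wide_enough = gap > width / 3
    # 3) the b-gap is smaller than that of every other candidate pair
    smallest = all(gap < abs(q1[1] - q2[1]) for q1, q2 in other_pairs)
    return k_in_range and wide_enough and smallest

print(is_boundary_pair((0.0013, 112), (0.0013, 401),
                       [((0.0, 50), (0.0, 700))], width=600))  # True
```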
In an implementation manner, the step S203 specifically includes the following steps:
step S2031, obtaining parameter information of the detection device, and determining a zoom image corresponding to the correction image according to the parameter information;
step S2032, acquiring a standard detection device image, and performing template matching on the standard detection device image and the zoom image to obtain a target area in the zoom image;
and S2033, intercepting the target area from the zoomed image to obtain the target image.
Specifically, template matching is a matching method commonly used in image processing to find where a specific template pattern is located within a larger image and thereby identify the object. In order to apply template matching to the corrected image, this embodiment first scales the corrected image: since the parameter information of the detection device reflects the standard size of the detection device, the parameter information is used to determine the scaling coefficient, so as to obtain the scaled image corresponding to the corrected image. Template matching is then performed on the scaled image against a pre-stored standard detection device image to obtain the target area, which is the area the detection device uses to display the detection result; the target area is therefore cut out from the scaled image to obtain the target image (as shown in fig. 4).
In one implementation, the target region may be determined using a correlation matching operation. Specifically, the standard detection device image and each area in the scaled image are used for correlation matching operation, and when the correlation of a certain area obtains the maximum value, the area is the target area. Wherein, the correlation matching formula is as follows:
R(x, y) = Σ_{x′,y′} [T(x′, y′) · I(x + x′, y + y′)] / sqrt(Σ_{x′,y′} T(x′, y′)² · Σ_{x′,y′} I(x + x′, y + y′)²)
where T is the standard detection device image and I is the scaled image.
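A direct transcription of this correlation matching into code might look like the following sketch (grayscale images as NumPy arrays are assumed; the brute-force double loop is written for clarity, not speed):

```python
import numpy as np

def match_template_ncc(image, template):
    """Normalized cross-correlation score R(x, y) for every placement of
    the template T over the image I (brute-force transcription of the formula)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float)
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw].astype(float)
            scores[y, x] = (patch * t).sum() / (t_norm * np.sqrt((patch ** 2).sum()))
    return scores

# A tiled background with a constant 3x3 "display area" patch embedded at (4, 2).
image = np.tile(np.array([[10.0, 20.0], [30.0, 40.0]]), (5, 5))
template = np.full((3, 3), 255.0)
image[4:7, 2:5] = template
scores = match_template_ncc(image, template)
best = tuple(int(i) for i in np.unravel_index(np.argmax(scores), scores.shape))
print(best)  # (4, 2): the correlation maximum marks the target area
```

The placement where the correlation reaches its maximum (here exactly 1.0, since the embedded patch matches the template) gives the top-left corner of the target area to be cropped.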
For example, the width of the corrected image is calculated to be (400-…), and the corrected image is scaled accordingly to obtain the scaled image. Template matching is then performed using the scaled image and a standard detection device image, giving a vertical start coordinate of the detection device in the scaled image of (0, 129). If the area used for displaying the detection result in the standard detection device image spans from (152, 1000) to (398, 1706), the target area in the scaled image spans from (152, 1129) to (398, 1835); this target area is cropped out to obtain the target image.
In an implementation manner, the step S204 specifically includes the following steps:
step S2041, inputting the target image into a neural network model which is trained in advance to obtain a classification result corresponding to the target image;
step S2042, determining the detection information according to the classification result.
Specifically, this embodiment trains a neural network model in advance. The training process is as follows: a plurality of detection devices displaying different detection results are prepared in advance and photographed under various environments to obtain a plurality of training images, which serve as 2-class samples; as shown in fig. 2, the left side is a negative training sample and the right side is a positive training sample. The initial neural network model is trained on these training images to obtain the trained neural network model. As shown in fig. 5, after the target image is input into the trained neural network model, the model performs inference on the target image to obtain the corresponding classification result (negative or positive), and the detection information of the user is then generated according to the classification result.
As shown in fig. 1, the method further comprises the steps of:
and step S300, generating a two-dimensional code corresponding to the user according to the identity information and the detection information.
Specifically, in this embodiment, after the user's identity information is associated with the corresponding detection information, a two-dimensional code dedicated to the user can be generated. Since the two-dimensional code reflects the user's detection information, the crowd classification of the user, and hence the corresponding release rule, can be determined from the two-dimensional code. For example, a red two-dimensional code indicates that the user belongs to a high-risk group and needs to be isolated, while a green two-dimensional code indicates that the user is healthy and can be released directly.
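The release-rule mapping just described can be sketched as a simple lookup. This is a hypothetical sketch only; the colour names and rule fields are illustrative and are not the patent's data model:

```python
def two_dimensional_code_rule(detection_result):
    """Map a user's detection result to the code colour and release rule
    described above (red: high-risk, isolate; green: healthy, release)."""
    rules = {
        "positive": {"color": "red", "release": False, "action": "isolate"},
        "negative": {"color": "green", "release": True, "action": "pass"},
    }
    return rules[detection_result]

rule = two_dimensional_code_rule("negative")
print(rule)  # {'color': 'green', 'release': True, 'action': 'pass'}
```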
In one implementation, the step S300 specifically includes the following steps:
step S301, sending the identity information and the detection information to a background blockchain for storage;
step S302, a hash value generated by the background blockchain based on the identity information and the detection information is obtained;
step S303, inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
and step S304, generating the two-dimensional code according to the detection information.
In order to ensure the universality of the two-dimensional code, this embodiment adopts blockchain technology to publish and store the user's detection information. Specifically, after the user sends the identity information and the detection information to the background blockchain, the background blockchain generates a new block, i.e., the target block, in the user's self-test record according to the identity information. A hash value corresponding to the target block is generated through a hash operation; this hash value serves as the index value of the target block and is used to look it up later. The detection information is then stored in the target block. Once storage is complete, the associated app on the user terminal receives the hash value of the target block; the app backend can then query the target block according to the hash value, acquire the detection information stored in it, and generate the two-dimensional code corresponding to the user from that detection information.
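Steps S301-S304 can be sketched as follows. This is a minimal single-node illustration using only the standard library; the class name, block fields, and dictionary-based lookup are assumptions made for the sketch, not the patent's implementation:

```python
import hashlib
import json
import time

class SelfTestChain:
    """Minimal sketch of the background blockchain: each upload becomes a
    block, indexed by a hash value that the user's app later uses to query it."""
    def __init__(self):
        self.blocks = {}  # hash value -> block

    def add_record(self, identity_info, detection_info):
        block = {
            "index": len(self.blocks),
            "timestamp": time.time(),
            "identity": identity_info,
            "detection": detection_info,
        }
        h = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        self.blocks[h] = block
        return h  # returned to the user's app as the index value (step S302)

    def query(self, hash_value):
        # Step S303: look up the target block and return its detection information.
        return self.blocks[hash_value]["detection"]

chain = SelfTestChain()
h = chain.add_record({"name": "user-a"}, {"result": "negative"})
print(chain.query(h))  # {'result': 'negative'}
```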
In one implementation, as shown in fig. 6, the background blockchain generates a new block as follows:
When the background server starts, it opens a port to serve HTTP requests and creates 2 initial empty lists: one for storing the blockchain and one for storing user self-test records. Each block contains an index, a timestamp, records, a proof, and a hash, where the hash identifies the previous block.
When the user uploads the identity information and the detection information, a new record is added to the user's self-test record, and the index of the next record is returned. Each record corresponds to a block. Each time a new block is generated, a PoW (proof-of-work) value, i.e., the hash value of the block, is created for it.
Specifically, the system searches for a number for the current block such that hashing this number together with the proof obtained from the previous block produces a new hash code whose first N characters match a specific sequence; a number satisfying this condition is the PoW proof. In one implementation, the new hash code is required to begin with the 4-character sequence rtus.
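This proof-of-work search can be sketched as below. Note the sketch uses a "00" hex prefix as a runnable stand-in for the required sequence, since a SHA-256 hex digest contains only the characters 0-9 and a-f; the prefix length and previous-proof value are assumptions made for the example:

```python
import hashlib

def proof_of_work(previous_proof, prefix="00"):
    """Search for a nonce whose hash, combined with the previous block's
    proof, begins with the required prefix (the PoW condition above)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{previous_proof}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = proof_of_work(previous_proof=100, prefix="00")
print(digest[:2])  # "00"
```

A longer prefix makes the search exponentially harder, which is what makes the proof costly to produce but trivial to verify.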
In one implementation, this embodiment uses Flask to build the background server, which acts as an independent node in the blockchain network and interacts with the background blockchain over the web via HTTP requests. The address of the independent node is set as the recipient of the block, thereby completing the construction of the blockchain web project. In one implementation, intranet penetration is applied to the node address through third-party software, and the resulting public network address is set in the application.
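A minimal Flask node of this kind might look like the following sketch (assuming Flask is installed; the route name, payload shape, and block fields are assumptions, and the endpoint is exercised via Flask's test client rather than a running server):

```python
import hashlib
import json
from flask import Flask, jsonify, request

app = Flask(__name__)
blockchain, records = [], []  # the two initial empty lists

@app.route("/upload", methods=["POST"])
def upload():
    # Each upload becomes a new block; its hash value is returned to the app.
    payload = request.get_json()
    block = {"index": len(blockchain), "records": payload}
    h = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    block["hash"] = h
    blockchain.append(block)
    return jsonify({"hash": h, "index": block["index"]})

# Exercise the endpoint without starting a server, via Flask's test client.
client = app.test_client()
resp = client.post("/upload", json={"identity": "user-a", "detection": "negative"})
print(resp.get_json()["index"])  # 0
```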
Based on the above embodiment, the present invention further provides a two-dimensional code generating device based on machine vision, as shown in fig. 7, the device includes:
an acquisition module 01, used for acquiring identity information of a user and a detection video, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
a determining module 02, configured to determine, according to the detection video, detection information corresponding to the user;
the generating module 03 is configured to generate a two-dimensional code corresponding to the user according to the identity information and the detection information.
Based on the above embodiments, the present invention further provides a terminal, and a schematic block diagram thereof may be as shown in fig. 8. The terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein the processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a machine vision-based two-dimensional code generation method. The display screen of the terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be understood by those skilled in the art that the block diagram of fig. 8 shows only part of the structure associated with the inventive arrangements and does not limit the terminals to which the inventive arrangements may be applied; a particular terminal may include more or fewer components than those shown, combine certain components, or arrange components differently.
In one implementation, one or more programs are stored in the memory of the terminal and configured to be executed by one or more processors; the one or more programs include instructions for performing the machine-vision-based two-dimensional code generation method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, databases, or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
In summary, the present invention discloses a machine-vision-based two-dimensional code generation method, device and storage medium. The method acquires identity information of a user and a detection video, wherein the detection video reflects a detection result of a detection device used only by that user; determines detection information corresponding to the user according to the detection video; and generates a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the problem in the prior art that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-test kit.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A two-dimensional code generation method based on machine vision is characterized by comprising the following steps:
acquiring identity information and a detection video of a user, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
determining detection information corresponding to the user according to the detection video;
and generating a two-dimensional code corresponding to the user according to the identity information and the detection information.
2. The method for generating two-dimensional code based on machine vision according to claim 1, wherein said determining detection information corresponding to the user according to the detection video comprises:
obtaining a local image according to the detection video;
rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
intercepting a target image from the corrected image;
and carrying out image recognition on the target image and determining the detection information.
3. The method for generating two-dimensional code based on machine vision according to claim 2, wherein said obtaining a local image according to the detection video comprises:
generating a video frame image according to the detection video;
inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
and clipping the video frame image according to the position information to obtain a local image.
4. The method for generating two-dimensional code based on machine vision according to claim 3, wherein said rotating said partial image to obtain a rotated image, and vertically cropping said rotated image to obtain a corrected image comprises:
determining a first straight line and a second straight line according to the local image, wherein the first straight line is used for reflecting the left boundary of the detection device, and the second straight line is used for reflecting the right boundary of the detection device;
determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
acquiring a first perpendicular line corresponding to the first straight line and a second perpendicular line corresponding to the second straight line in the rotation image;
and vertically clipping the rotated image according to the first vertical line and the second vertical line to obtain the corrected image.
5. The method for generating two-dimensional code based on machine vision according to claim 4, wherein said determining a first straight line and a second straight line according to the local image comprises:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the edge points, and determining a first straight line and a second straight line according to the target edge points.
6. The method for generating two-dimensional code based on machine vision according to claim 5, wherein said determining a plurality of target edge points from a plurality of said edge points, and determining a first straight line and a second straight line according to a plurality of said target edge points comprises:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
and obtaining statistical values respectively corresponding to the plurality of transformation points, and performing non-maximum suppression on the plurality of transformation points according to the statistical values to obtain a plurality of target edge points.
7. The method for generating two-dimensional code based on machine vision according to claim 2, wherein said intercepting a target image from the corrected image comprises:
acquiring parameter information of the detection device, and determining a zoom image corresponding to the correction image according to the parameter information;
acquiring a standard detection device image, and performing template matching on the standard detection device image and the zoomed image to obtain a target area in the zoomed image;
and intercepting the target area from the zoomed image to obtain the target image.
8. The method for generating the two-dimensional code based on the machine vision according to claim 1, wherein the generating the two-dimensional code corresponding to the user according to the identity information and the detection information comprises:
sending the identity information and the detection information to a background block chain for storage;
acquiring a hash value generated by the background block chain based on the identity information and the detection information;
inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
and generating the two-dimensional code according to the detection information.
9. A two-dimensional code generation device based on machine vision, the device comprising:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring identity information of a user and a detection video, the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
the determining module is used for determining the detection information corresponding to the user according to the detection video;
and the generating module is used for generating the two-dimensional code corresponding to the user according to the identity information and the detection information.
10. A computer-readable storage medium having stored thereon a plurality of instructions adapted to be loaded and executed by a processor to implement the steps of the machine-vision based two-dimensional code generation method according to any one of claims 1 to 8.
CN202110800633.0A 2021-07-15 2021-07-15 Two-dimensional code generation method and device based on machine vision and storage medium Active CN113610205B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110800633.0A CN113610205B (en) 2021-07-15 2021-07-15 Two-dimensional code generation method and device based on machine vision and storage medium


Publications (2)

Publication Number Publication Date
CN113610205A true CN113610205A (en) 2021-11-05
CN113610205B CN113610205B (en) 2022-12-27

Family

ID=78337620

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110800633.0A Active CN113610205B (en) 2021-07-15 2021-07-15 Two-dimensional code generation method and device based on machine vision and storage medium

Country Status (1)

Country Link
CN (1) CN113610205B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100110193A1 (en) * 2007-01-12 2010-05-06 Sachio Kobayashi Lane recognition device, vehicle, lane recognition method, and lane recognition program
US20130022280A1 (en) * 2011-07-19 2013-01-24 Fuji Xerox Co., Ltd. Methods for improving image search in large-scale databases
CN105455789A (en) * 2014-09-09 2016-04-06 曲刚 Unattended self-help health information collecting system and method based on network technique
CN106156517A (en) * 2016-07-22 2016-11-23 广东工业大学 The self-service automatic checkout system in a kind of human body basic disease community
CN106338596A (en) * 2016-08-24 2017-01-18 四川长虹通信科技有限公司 Health monitoring method, health monitoring apparatus, and electronic equipment
CN206420781U (en) * 2016-12-22 2017-08-18 中国移动通信有限公司研究院 A kind of terminal, server and health detecting system
CN107731307A (en) * 2017-10-13 2018-02-23 安徽师范大学 A kind of physical health self-measuring system
CN108305261A (en) * 2017-08-11 2018-07-20 腾讯科技(深圳)有限公司 Picture segmentation method, apparatus, storage medium and computer equipment
CN109406506A (en) * 2018-12-06 2019-03-01 北京腾康汇医科技有限公司 A kind of shared self-rated health terminal and test method
CN109461482A (en) * 2018-05-29 2019-03-12 平安医疗健康管理股份有限公司 Health plan generation method, device, computer equipment and storage medium
CN109870448A (en) * 2019-02-09 2019-06-11 智锐达仪器科技南通有限公司 A kind of colloidal gold test paper card detecting instrument and control method is detected accordingly
CN110286124A (en) * 2018-03-14 2019-09-27 浙江大学山东工业技术研究院 Refractory brick measuring system based on machine vision
CN110320358A (en) * 2018-03-30 2019-10-11 深圳市贝沃德克生物技术研究院有限公司 Diabetic nephropathy biomarker detection device and method
CN110954555A (en) * 2019-12-26 2020-04-03 宋佳 WDT 3D vision detection system
CN111613333A (en) * 2020-05-29 2020-09-01 惠州Tcl移动通信有限公司 Self-service health detection method and device, storage medium and mobile terminal
CN111899830A (en) * 2020-08-06 2020-11-06 苏州贝福加智能系统有限公司 Non-contact intelligent health detection system, detection method and detection device
CN111916203A (en) * 2020-06-18 2020-11-10 北京百度网讯科技有限公司 Health detection method and device, electronic equipment and storage medium
CN112259238A (en) * 2020-10-20 2021-01-22 平安科技(深圳)有限公司 Electronic device, disease type detection method, apparatus, and medium
CN112890767A (en) * 2020-12-30 2021-06-04 浙江大学 Automatic detection device and method for health state of mouth, hands and feet


Also Published As

Publication number Publication date
CN113610205B (en) 2022-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant