CN113610205B - Two-dimensional code generation method and device based on machine vision and storage medium - Google Patents
- Publication number
- CN113610205B (application CN202110800633.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- detection
- target
- determining
- straight line
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/06009—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
- G06K19/06037—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Radiology & Medical Imaging (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Physics & Mathematics (AREA)
- Epidemiology (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a two-dimensional code generation method, device, and storage medium based on machine vision. The method includes: obtaining identity information of a user and a detection video, where the detection video reflects a detection result of a detection device used only by the user; determining detection information corresponding to the user according to the detection video; and generating a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the prior-art problem that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing box.
Description
Technical Field
The invention relates to the field of artificial intelligence, in particular to a two-dimensional code generation method and device based on machine vision and a storage medium.
Background
In the past, people's health status could only be determined by detection in hospitals. With the development of science and technology, many convenient disease self-testing boxes have become available, allowing users to perform disease detection at home by themselves and then decide, according to the self-test results, whether they need to go to a hospital for detection, saving users a great deal of time. The health code recently released in China can preliminarily screen out healthy people, facilitates passage, and reduces epidemic-prevention pressure. However, at present, the state of the health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing box, so the health code cannot be updated unless the user goes to a hospital for detection.
Thus, there is still a need for improvement and development of the prior art.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a two-dimensional code generation method, device and storage medium based on machine vision, aiming at solving the problem that the state of a health code in the prior art can only be updated synchronously with the detection result of a hospital and cannot be updated synchronously with the detection result of a disease self-testing box.
The technical scheme adopted by the invention for solving the problems is as follows:
in a first aspect, an embodiment of the present invention provides a two-dimensional code generation method based on machine vision, where the method includes:
acquiring identity information and a detection video of a user, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
determining detection information corresponding to the user according to the detection video;
and generating a two-dimensional code corresponding to the user according to the identity information and the detection information.
In one embodiment, the determining, according to the detection video, detection information corresponding to the user includes:
obtaining a local image according to the detection video;
rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
intercepting a target image from the corrected image;
and carrying out image recognition on the target image and determining the detection information.
In one embodiment, the obtaining a local image according to the detection video includes:
generating a video frame image according to the detection video;
inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
and clipping the video frame image according to the position information to obtain a local image.
In one embodiment, the rotating the partial image to obtain a rotated image and vertically cropping the rotated image to obtain a corrected image includes:
determining a first straight line and a second straight line according to the partial image, wherein the first straight line is used for reflecting the left boundary of the detection device, and the second straight line is used for reflecting the right boundary of the detection device;
determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
acquiring a first vertical line corresponding to the first straight line and a second vertical line corresponding to the second straight line in the rotation image;
and vertically clipping the rotating image according to the first vertical line and the second vertical line to obtain the corrected image.
In one embodiment, the determining the first line and the second line from the partial image includes:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points.
In one embodiment, the determining a plurality of target edge points from the plurality of edge points, and determining a first line and a second line according to the plurality of target edge points includes:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
and obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points.
In one embodiment, the intercepting a target image from the corrected image comprises:
acquiring parameter information of the detection device, and determining a zoom image corresponding to the correction image according to the parameter information;
acquiring a standard detection device image, and performing template matching on the standard detection device image and the zoomed image to obtain a target area in the zoomed image;
and intercepting the target area from the zoomed image to obtain the target image.
In an embodiment, the generating a two-dimensional code corresponding to the user according to the identity information and the detection information includes:
sending the identity information and the detection information to a background block chain for storage;
acquiring a hash value generated by the background block chain based on the identity information and the detection information;
inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
and generating the two-dimensional code according to the detection information.
In a second aspect, an embodiment of the present invention further provides a two-dimensional code generating apparatus based on machine vision, where the apparatus includes:
the device comprises an acquisition module, a detection module and a processing module, wherein the acquisition module is used for acquiring identity information of a user and a detection video, the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
the determining module is used for determining the detection information corresponding to the user according to the detection video;
and the generating module is used for generating the two-dimensional code corresponding to the user according to the identity information and the detection information.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium, on which a plurality of instructions are stored, where the instructions are adapted to be loaded and executed by a processor to implement any of the steps of the machine-vision-based two-dimensional code generation method described above.
The invention has the beneficial effects that: the method obtains identity information of a user and a detection video, where the detection video reflects a detection result of a detection device used only by the user; determines detection information corresponding to the user according to the detection video; and generates a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the prior-art problem that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing box.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of a two-dimensional code generation method based on machine vision according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a 2-class training sample provided in an embodiment of the present invention.
Fig. 3 is a schematic diagram of detecting a video according to an embodiment of the present invention.
Fig. 4 is a schematic flowchart of acquiring a target image according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of the operation of the neural network model according to the embodiment of the present invention.
Fig. 6 is a schematic diagram of the operation of the blockchain according to the embodiment of the present invention.
Fig. 7 is a connection diagram of internal modules of a two-dimensional code generation device based on machine vision according to an embodiment of the present invention.
Fig. 8 is a functional block diagram of a terminal according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit it.
It should be noted that where directional indications (such as up, down, left, right, front, and back) are involved in the embodiments of the present invention, they are used only to explain the relative positional relationships, motion, and the like of components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
In the past, people's health status could only be determined by detection in hospitals. With the development of science and technology, many convenient disease self-testing boxes have become available, allowing users to perform disease detection at home by themselves and then decide, according to the self-test results, whether they need to go to a hospital for detection, saving users a great deal of time. The health code recently released in China can preliminarily screen out healthy people, facilitates passage, and reduces epidemic-prevention pressure. However, at present, the state of the health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing box, so the health code cannot be updated unless the user goes to a hospital for detection.
In view of the above drawbacks of the prior art, the present invention provides a two-dimensional code generation method based on machine vision, which includes: obtaining identity information of a user and a detection video, where the detection video reflects a detection result of a detection device used only by the user; determining detection information corresponding to the user according to the detection video; and generating a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the prior-art problem that the state of a health code can only be updated synchronously with hospital detection results and cannot be updated synchronously with the results of a disease self-testing box.
As shown in fig. 1, the method comprises the steps of:
step S100, identity information of a user and a detection video are obtained, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user.
Specifically, since the health code is a two-dimensional code subject to real-name authentication, in order to enable the user to update the health code based on the detection result of a self-testing kit, this embodiment acquires the user's identity information together with a detection video. The detection video captures the detection result generated by the detection device after the user performs the detection, and therefore reflects the user's latest detection result.
In one implementation, this embodiment may provide an application program for uploading the user's identity information and the detection video. In practical application, the user scans the two-dimensional code on the disease self-testing box to upload identity information and open the application's shooting function; the entire process of using the disease self-test and performing the detection is recorded through this shooting function, ensuring that no third party assists in or falsifies the self-test result, and the detection video is uploaded automatically after shooting ends.
As shown in fig. 1, the method further comprises the steps of:
and S200, determining detection information corresponding to the user according to the detection video.
Specifically, since the detection result generated after the detection device is used by the user is captured in the detection video, the detection information corresponding to the user can be obtained by identifying and analyzing the detection video, and the detection information can be used as reference information for determining whether the user is ill or not.
In one implementation, the step S200 specifically includes the following steps:
step S201, obtaining a local image according to the detection video;
step S202, rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
step S203, intercepting a target image from the corrected image;
and step S204, carrying out image recognition on the target image and determining the detection information.
Since the detection video includes an image of the detection device, this embodiment first cuts out a local image related to the detection device from the detection video. The local image contains not only the detection device but also much redundant content, such as the shooting scene, which contributes nothing to identifying the user's detection information, so these redundant backgrounds must be removed from the local image. Specifically, this embodiment first rotates the local image so that the detection device lies in the vertical direction, yielding a rotated image. The rotated image is then cropped vertically to remove the redundant background on the left and right sides of the detection device (as shown in fig. 4), yielding a corrected image. Since usually only part of the detection device is used to display the detection result, a target image is cut out from the corrected image (as shown in fig. 4), and image recognition is performed on the target image to obtain the user's detection information.
In an implementation manner, the step S201 specifically includes the following steps:
step S2011, generating a video frame image according to the detection video;
step S2012, inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
and S2013, clipping the video frame image according to the position information to obtain a local image.
Specifically, the present embodiment trains an image detection model in advance, and inputs a video frame image generated based on a detection video into the image detection model, so as to obtain the position information of the detection device. Based on the position information, the video frame image can be cropped to eliminate redundant information in the video frame image, so as to obtain a local image, wherein the local image includes the detection device.
In order to ensure that no third party assists in or falsifies the self-test result, this embodiment may also train a face detection model in advance and require that the detection video shot by the user include the user's own face (as shown in fig. 3). After the detection video is obtained, a video frame image generated from it is input into the face detection model to obtain the facial features in the frame; detecting-person information is determined from the facial features and matched against the user's identity information, and a successful match indicates that the user is the actual person performing the detection.
In an implementation manner, the step S202 specifically includes the following steps:
step S2021, determining a first straight line and a second straight line according to the local image, where the first straight line is used to reflect a left boundary of the detection apparatus, and the second straight line is used to reflect a right boundary of the detection apparatus;
step S2022, determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
step S2023, acquiring a first perpendicular corresponding to the first straight line and a second perpendicular corresponding to the second straight line in the rotated image;
step S2024, performing vertical cropping on the rotated image according to the first vertical line and the second vertical line to obtain the corrected image.
Specifically, this embodiment first determines, in the local image, two straight lines reflecting the left and right boundaries of the detection device: a first straight line and a second straight line. The angle of the first and second straight lines relative to the y-axis is then calculated; this angle is the rotation angle, and the local image is rotated about its midpoint by this angle to obtain the rotated image. Since the first and second straight lines rotate along with the local image, in the rotated image the first straight line becomes the first vertical line and the second straight line becomes the second vertical line. Because the first and second vertical lines reflect the left and right boundaries of the detection device respectively, the redundant background on the left of the detection device in the rotated image is removed according to the first vertical line, and that on the right according to the second vertical line, yielding the corrected image.
For example, assume the equation of the first line is x = 0.001346·y + 112 and the equation of the second line is x = 0.001265·y + 401. Since the left and right boundaries are a pair of parallel edges, k1 ≈ k2. The first and second straight lines form an angle θ relative to the y-axis, where θ = arctan((k1 + k2)/2) = arctan(0.0013055) = 0.0747996°. The local image is rotated counterclockwise by θ = 0.0747996° about its midpoint to obtain the rotated image, in which the first and second straight lines become the first and second vertical lines, x = 111 and x = 400 respectively. With a rotated-image height of 1715, the region from (111, 0) to (400, 1715) is cut out of the rotated image; this region contains the left and right boundaries of the detection device.
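The arithmetic of this worked example can be sketched in a few lines; the function names are illustrative and not part of the embodiment:

```python
import math

def rotation_angle_deg(k1: float, k2: float) -> float:
    """Angle of the near-parallel boundary lines x = k*y + b relative to
    the y-axis, computed from the average slope, in degrees."""
    return math.degrees(math.atan((k1 + k2) / 2))

def crop_bounds(b1: float, b2: float) -> tuple:
    """After rotation the boundary lines become verticals; the corrected
    image keeps only the columns between the two intercepts."""
    return (min(b1, b2), max(b1, b2))

# slopes from the example's two boundary-line equations
theta = rotation_angle_deg(0.001346, 0.001265)   # about 0.0748 degrees
left, right = crop_bounds(112, 401)
```

Note that the intercepts shift slightly under the rotation (the example's verticals end up at x = 111 and x = 400); the sketch only reproduces the angle computation and the crop-bound selection.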
In an implementation manner, the determining a first straight line and a second straight line according to the local image specifically includes the following steps:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the edge points, and determining a first straight line and a second straight line according to the target edge points.
Specifically, the edge detection operator used in this embodiment is the Canny operator, an optimized operator with multiple stages of filtering, enhancement, and detection. The local image is first smoothed by the Canny operator to remove noise, and a plurality of edge points are obtained; target edge points are then screened from these edge points and substituted into the preset first and second linear equations to obtain the first straight line and the second straight line.
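As a rough illustration of the edge-detection step: the embodiment uses the full Canny operator, whereas this sketch applies only a Sobel gradient-magnitude stage (no Gaussian smoothing, non-maximum suppression, or hysteresis), which is enough to show how edge points are extracted:

```python
def sobel_edge_points(img, thresh):
    """Return (x, y) pixels whose Sobel gradient magnitude exceeds `thresh`.
    A minimal stand-in for the Canny operator of the embodiment (Canny adds
    smoothing, non-maximum suppression, and hysteresis thresholding)."""
    h, w = len(img), len(img[0])
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # horizontal and vertical Sobel responses
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if (gx*gx + gy*gy) ** 0.5 > thresh:
                points.append((x, y))
    return points

# a 5x6 test image with a vertical step edge between columns 2 and 3
img = [[0, 0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_edge_points(img, 100)
```

On this test image, the detected edge points cluster on the two columns straddling the step, which is exactly the kind of boundary the embodiment feeds into the straight-line fitting.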
In one implementation, the determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points includes:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
and obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points.
Specifically, a plurality of edge points are obtained after edge detection is performed on the local image with the Canny operator. The Hough transform is then used to map the edge points into the kb parameter space, yielding a plurality of transform points in one-to-one correspondence with the edge points. Non-maximum suppression is then performed according to the statistic value of each transform point in the kb parameter space, giving a plurality of target edge points.
The principle of the Hough transform is as follows. A straight line can be represented in the form y = kx + b, where k and b are the parameters of the line. Assuming the minimum angular unit of a line is 1°, up to 180 lines can pass through a single point. Consider a straight line y = x + 3 of length 100 at an angle of 45°. Each of its 100 points is counted at every angle from 0° to 179°, giving 100 × 180 statistics, which are stored in a two-dimensional matrix. It turns out that 100 of the statistics all fall on the matrix cell (k = tan 45°, b = 3), while the counts at other cells are much smaller than 100 or even 0. Such a matrix for recording statistics is called the kb parameter space. The value v at a point (k, b) of the matrix indicates that a straight line y = kx + b of length v exists in the original image. To find all straight lines longer than 50 in the image, search the kb parameter space for all points with v > 50; the coordinates of each such point are the parameters of one line.
The detailed steps of the hough transform are as follows:
(1) Establish a two-dimensional matrix as the kb parameter space and initialize all elements to 0;
(2) Search the edge-detection image for black points (edge points) and obtain their coordinates (x, y);
(3) Let θ traverse from 0° to 179° and substitute (x, y) into y = tan(θ)·x + b, giving b = int(y − tan(θ)·x), where int denotes rounding; this yields 180 pairs (θ, b);
(4) Find the coordinate corresponding to each of the 180 pairs (θ, b) in the kb parameter space and increment the value there by 1;
(5) Repeat for all black points; after the preliminary statistics are complete, perform non-maximum suppression: traverse each point of the kb parameter space and compare its value with those of its 8-neighborhood; if the current point's value is not the maximum of its 8-neighborhood, set it to 0; otherwise keep it;
(6) Finally, search the remaining points for the target edge points.
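Steps (1)-(5) can be sketched as follows. The accumulator is kept as a dictionary rather than a dense two-dimensional matrix, and θ = 90° is skipped because tan(90°) is undefined (practical implementations use the ρ-θ parameterization to handle vertical lines); the function names are illustrative:

```python
import math
from collections import defaultdict

def hough_accumulate(edge_points, thetas=range(180)):
    """Steps (1)-(4): vote each edge point into (theta, b) space using
    y = tan(theta)*x + b, i.e. b = int(y - tan(theta)*x)."""
    acc = defaultdict(int)
    for (x, y) in edge_points:
        for t in thetas:
            if t == 90:          # tan(90 deg) is undefined; the rho-theta
                continue         # form handles vertical lines instead
            b = int(round(y - math.tan(math.radians(t)) * x))
            acc[(t, b)] += 1
    return acc

def non_max_suppress(acc):
    """Step (5): keep a cell only if it is the maximum of its 8-neighborhood."""
    keep = {}
    for (t, b), v in acc.items():
        neigh = [acc.get((t + dt, b + db), 0)
                 for dt in (-1, 0, 1) for db in (-1, 0, 1)
                 if (dt, db) != (0, 0)]
        if v >= max(neigh):
            keep[(t, b)] = v
    return keep

# 100 collinear points on y = x + 3 should all vote for (theta=45, b=3)
pts = [(x, x + 3) for x in range(100)]
acc = hough_accumulate(pts)
peaks = non_max_suppress(acc)
```

Running this on the y = x + 3 example reproduces the principle described above: the cell (45°, b = 3) accumulates all 100 votes and survives non-maximum suppression.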
In one implementation, the target edge points are given by a pair of the transform points, where the pair satisfies the following conditions:
1) The k values corresponding to the pair of transform points lie between −π/4 and π/4;
2) The absolute value of the difference between the b values corresponding to the pair of transform points is greater than 1/3 of the width of the local image;
3) The absolute value of the difference between the b values corresponding to the pair of transform points is smaller than that of every other pair of transform points in the parameter space.
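Conditions 1)-3) amount to selecting, among all pairs of transform points with near-vertical slopes and sufficiently separated intercepts, the pair whose intercepts are closest together. A minimal sketch, with illustrative function and variable names:

```python
import math
from itertools import combinations

def pick_boundary_pair(points, image_width):
    """Apply conditions 1)-3) to (k, b) transform points: keep pairs whose
    slopes lie in [-pi/4, pi/4] and whose intercept gap exceeds width/3,
    then return the pair with the smallest intercept gap."""
    candidates = []
    for (k1, b1), (k2, b2) in combinations(points, 2):
        if not (-math.pi/4 <= k1 <= math.pi/4 and
                -math.pi/4 <= k2 <= math.pi/4):
            continue                       # condition 1)
        gap = abs(b1 - b2)
        if gap <= image_width / 3:
            continue                       # condition 2)
        candidates.append((gap, ((k1, b1), (k2, b2))))
    return min(candidates)[1] if candidates else None  # condition 3)

# two near-vertical boundary candidates plus two distractors
points = [(0.0013, 112), (0.0012, 401), (0.0, 10), (2.0, 300)]
pair = pick_boundary_pair(points, 500)
```

In this toy input the steep line (k = 2.0) and the too-close intercept (b = 10 versus 112) are rejected, leaving the pair corresponding to the left and right boundaries of the device.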
In an implementation manner, the step S203 specifically includes the following steps:
step S2031, obtaining parameter information of the detection device, and determining a zoom image corresponding to the correction image according to the parameter information;
step S2032, acquiring a standard detection device image, and performing template matching on the standard detection device image and the zoom image to obtain a target area in the zoom image;
and S2033, intercepting the target area from the zoomed image to obtain the target image.
Specifically, template matching is a method commonly used in image processing to locate where the pattern of a specific object lies within a larger image, and thereby identify the object. To apply template matching to the corrected image, this embodiment first scales the corrected image: since the parameter information of the detection device reflects its standard size, this parameter information determines the scaling coefficient used, yielding the scaled image corresponding to the corrected image. Template matching is then performed on the scaled image against a pre-stored standard detection device image to obtain a target area, which is the area the detection device uses to display the detection result; the target area is cut out of the scaled image to obtain the target image (as shown in fig. 4).
In one implementation, the target region may be determined using a correlation matching operation. Specifically, a correlation matching operation is performed between the standard detection device image and each area of the scaled image; the area where the correlation attains its maximum value is the target area. The correlation matching formula is as follows:

R(x, y) = Σ(x′,y′) [T(x′, y′) · I(x + x′, y + y′)] / sqrt(Σ(x′,y′) T(x′, y′)² · Σ(x′,y′) I(x + x′, y + y′)²)

where T is the standard detection device image and I is the scaled image.
For example, the width of the corrected image is 400 − 111 = 289, and the standard width of the standard detection device image is 534; scaling the corrected image by the ratio of the standard width 534 to the corrected width 289 yields a 534 × 3169 scaled image. Template matching between the scaled image and the standard detection device image then gives (0, 129) as the vertical starting coordinate of the detection device in the scaled image. If the area for displaying the detection result in the standard detection device image is (152, 1000) to (398, 1706), the target area in the scaled image is (152, 1129) to (398, 1835); cutting out this target area gives the target image.
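The arithmetic of this example can be expressed as a small sketch (function names are illustrative, not from the embodiment):

```python
def scale_factor(corrected_width: int, standard_width: int) -> float:
    """Scale the corrected image so its width matches the standard
    detection device image."""
    return standard_width / corrected_width

def target_region(result_region, match_y_offset: int):
    """Shift the standard image's result-display region by the vertical
    offset found via template matching."""
    (x1, y1), (x2, y2) = result_region
    return ((x1, y1 + match_y_offset), (x2, y2 + match_y_offset))

s = scale_factor(400 - 111, 534)            # corrected width 289 -> 534
region = target_region(((152, 1000), (398, 1706)), 129)
```

With the example's numbers, the region comes out as (152, 1129) to (398, 1835), matching the coordinates computed above.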
In one implementation, the step S204 specifically includes the following steps:
step S2041, inputting the target image into a neural network model which is trained in advance to obtain a classification result corresponding to the target image;
step S2042, determining the detection information according to the classification result.
Specifically, this embodiment trains a neural network model in advance. The training process is as follows: a plurality of detection devices displaying different detection results are prepared and photographed in various environments to obtain a plurality of training images, which form binary classification samples; as shown in fig. 2, the left side is a negative training sample and the right side is a positive training sample. An initial neural network model is trained on these images to obtain the trained neural network model. As shown in fig. 5, after the target image is input into the trained neural network model, the model performs inference on the target image to obtain the corresponding classification result, such as negative or positive, and the detection information of the user is then generated from the classification result.
As shown in fig. 1, the method further comprises the steps of:
and S300, generating a two-dimensional code corresponding to the user according to the identity information and the detection information.
Specifically, in this embodiment, after the identity information of the user is associated with the corresponding detection information, a two-dimensional code dedicated to the user may be generated. Since the two-dimensional code reflects the detection information of the user, the crowd classification of the user, and hence the corresponding release rule, can be determined from the two-dimensional code. For example, a red two-dimensional code indicates that the user belongs to a high-risk group and needs to be isolated, while a green two-dimensional code indicates that the user is healthy and may be released directly.
In one implementation, the step S300 specifically includes the following steps:
step S301, sending the identity information and the detection information to a background blockchain for storage;
step S302, acquiring a hash value generated by the background blockchain based on the identity information and the detection information;
step S303, inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
and step S304, generating the two-dimensional code according to the detection information.
To ensure the universality of the two-dimensional code, this embodiment uses blockchain technology to publish and store the detection information of the user. Specifically, after the user sends the identity information and the detection information to the background blockchain, the background blockchain generates a new block, i.e., a target block, in the self-test record of the user according to the identity information. A hash value corresponding to the target block is generated through a hash operation; this hash value serves as the index of the target block and is used to look it up later. The detection information is then stored in the target block. After storage is completed, the relevant App on the user terminal receives the hash value of the target block; the App background queries the target block by this hash value, acquires the detection information stored in it, and generates the two-dimensional code corresponding to the user from the detection information.
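Steps S301 to S303 can be sketched as follows. The field names and hashing scheme below are illustrative assumptions, not the patent's exact schema; the point shown is that the block's hash serves as the index used to query the target block later:

```python
import hashlib
import json
import time

blockchain, self_test_records = [], []   # the 2 initial empty lists

def new_block(identity, detection):
    """Store the identity and detection information as a new block and
    return the hash value later used to look the block up."""
    block = {
        "index": len(blockchain),
        "timestamp": time.time(),
        "records": {"identity": identity, "detection": detection},
        "prev_hash": blockchain[-1]["hash"] if blockchain else None,
    }
    # Hash over the deterministic fields so the value can serve as an index.
    hashed_fields = {k: block[k] for k in ("index", "records", "prev_hash")}
    block["hash"] = hashlib.sha256(
        json.dumps(hashed_fields, sort_keys=True).encode()).hexdigest()
    blockchain.append(block)
    self_test_records.append(block["records"])
    return block["hash"]

def find_block(hash_value):
    """Query the target block by its hash value (step S303)."""
    return next(b for b in blockchain if b["hash"] == hash_value)
```

The detection information retrieved from the target block would then be passed to the two-dimensional code generator (step S304).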
In one implementation, as shown in fig. 6, the background blockchain generates a new block as follows:
When the background server starts, it opens a port to serve HTTP and creates 2 initial empty lists, one for storing the blockchain and one for storing user self-test records. Each block contains an index, a timestamp, records, a proof, and a hash, where the hash is used to identify the previous block.
When the user uploads identity information and detection information, a new record is added to the self-test record corresponding to the user, and the index of the next record is returned. Each record corresponds to a block, and each time a new block is generated, a PoW (proof of work), i.e., the hash value of the block, is created for it.
Specifically, the system searches for a number in the current block such that hashing this number together with the number parsed from the previous block produces a new hash code whose first N characters match a specific sequence; a number satisfying this condition is the PoW proof. In one implementation, the first 4 characters of the new hash code are required to match the sequence rtus.
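The proof-of-work search described above can be sketched as follows. The SHA-256 hash, string concatenation, and the "00" prefix are assumptions for illustration; the patent only requires that the first N characters of the new hash match some specific sequence:

```python
import hashlib

def proof_of_work(last_number, prefix="00"):
    """Search for a number whose hash, combined with the number from the
    previous block, starts with the required prefix."""
    candidate = 0
    while True:
        digest = hashlib.sha256(
            f"{last_number}{candidate}".encode()).hexdigest()
        if digest.startswith(prefix):
            return candidate
        candidate += 1
```

Lengthening the required prefix increases the expected number of candidates to try, which is how such schemes tune the difficulty of producing a block.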
In one implementation, this embodiment builds the background server with Flask and, acting as an independent node in the blockchain network, uses HTTP requests to interact with the background blockchain over the web. The address of this independent node is set as the recipient of the block, completing the construction of the blockchain web project. In one implementation, the node address is exposed through intranet penetration by third-party software, and the resulting public network address is configured in the application.
Based on the above embodiment, the present invention further provides a two-dimensional code generating apparatus based on machine vision, as shown in fig. 7, the apparatus includes:
the system comprises an acquisition module 01, a detection module and a display module, wherein the acquisition module is used for acquiring identity information of a user and a detection video, the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
a determining module 02, configured to determine, according to the detection video, detection information corresponding to the user;
the generating module 03 is configured to generate a two-dimensional code corresponding to the user according to the identity information and the detection information.
Based on the above embodiment, the present invention further provides a terminal, and a functional block diagram of the terminal may be as shown in fig. 8. The terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. Wherein the processor of the terminal is configured to provide computing and control capabilities. The memory of the terminal comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the terminal is used for connecting and communicating with an external terminal through a network. The computer program is executed by a processor to implement a two-dimensional code generation method based on machine vision. The display screen of the terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the block diagram of fig. 8 is only a block diagram of part of the structure associated with the solution of the present invention and does not limit the terminal to which the solution is applied; a specific terminal may include more or fewer components than shown in the figure, combine some components, or arrange the components differently.
In one implementation, one or more programs are stored in the memory of the terminal and configured to be executed by one or more processors, the one or more programs including instructions for performing the machine-vision-based two-dimensional code generation method.
It will be understood by those skilled in the art that all or part of the processes of the methods in the above embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the above method embodiments. Any reference to memory, storage, a database or other media used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
In summary, the present invention discloses a two-dimensional code generation method, device and storage medium based on machine vision. The method acquires identity information of a user and a detection video, wherein the detection video reflects a detection result of a detection device used only by that user; determines the detection information corresponding to the user according to the detection video; and generates a two-dimensional code corresponding to the user according to the identity information and the detection information. This solves the prior-art problem that the state of a health code can only be updated synchronously with the detection results of a hospital and cannot be updated synchronously with the detection results of a disease self-test kit.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.
Claims (3)
1. A two-dimensional code generation method based on machine vision is characterized by comprising the following steps:
acquiring identity information and a detection video of a user, wherein the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
determining detection information corresponding to the user according to the detection video;
generating a two-dimensional code corresponding to the user according to the identity information and the detection information;
the determining, according to the detection video, detection information corresponding to the user includes:
obtaining a local image according to the detection video;
rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
intercepting a target image from the corrected image;
carrying out image recognition on the target image and determining the detection information;
obtaining a local image according to the detection video, including:
generating a video frame image according to the detection video;
inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
clipping the video frame image according to the position information to obtain a local image;
the rotating the local image to obtain a rotated image, and vertically cropping the rotated image to obtain a corrected image, includes:
determining a first straight line and a second straight line according to the local image, wherein the first straight line is used for reflecting the left boundary of the detection device, and the second straight line is used for reflecting the right boundary of the detection device;
determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
acquiring, in the rotated image, a first perpendicular line corresponding to the first straight line and a second perpendicular line corresponding to the second straight line;
vertically clipping the rotated image according to the first vertical line and the second vertical line to obtain the corrected image;
determining a first line and a second line from the local image, comprising:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points;
determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points, including:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points;
the plurality of transformation points satisfy the following conditions:
the k value corresponding to the transformation point is between-pi/4 and pi/4;
the absolute value of the difference between the b values corresponding to the transformation points is greater than 1/3 of the width of the local image;
the absolute value of the difference between the b values corresponding to the transformation points is smaller than the absolute value of the difference between the b values corresponding to all other transformation points in the parameter space;
the intercepting a target image from the corrected image comprises:
acquiring parameter information of the detection device, and determining a scaled image corresponding to the corrected image according to the parameter information;
acquiring a standard detection device image, and performing template matching on the standard detection device image and the scaled image to obtain a target area in the scaled image;
intercepting the target area from the scaled image to obtain the target image;
the image recognition of the target image and the determination of the detection information include:
inputting the target image into a neural network model which is trained in advance to obtain a classification result corresponding to the target image;
determining the detection information according to the classification result;
the generating the two-dimensional code corresponding to the user according to the identity information and the detection information includes:
sending the identity information and the detection information to a background block chain for storage;
acquiring a hash value generated by the background block chain based on the identity information and the detection information;
inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
generating the two-dimensional code according to the detection information;
the process of generating a new block by the background block chain comprises the following steps:
creating an initial empty list for storing a block chain and an initial empty list for storing user self-test records;
when a user uploads identity information and detection information, adding a new record to an initial empty list for storing user self-test records, and returning an index of a next record, wherein each record corresponds to a block, and each block corresponds to a PoW workload certificate;
for the PoW proof of work corresponding to each block, performing a hash operation on the number of the proof and the number parsed from the previous block of the block yields a new hash value whose first N bits match a specific sequence.
2. A two-dimensional code generation device based on machine vision, the device comprising:
the system comprises an acquisition module, a detection module and a display module, wherein the acquisition module is used for acquiring identity information and a detection video of a user, the detection video is used for reflecting a detection result of a detection device, and the detection device is only used by the user;
the determining module is used for determining the detection information corresponding to the user according to the detection video;
the generating module is used for generating a two-dimensional code corresponding to the user according to the identity information and the detection information;
the determining detection information corresponding to the user according to the detection video includes:
obtaining a local image according to the detection video;
rotating the local image to obtain a rotated image, and vertically cutting the rotated image to obtain a corrected image;
intercepting a target image from the corrected image;
carrying out image recognition on the target image and determining the detection information;
obtaining a local image according to the detection video, including:
generating a video frame image according to the detection video;
inputting the video frame image into a pre-trained image detection model to obtain position information corresponding to the detection device;
clipping the video frame image according to the position information to obtain a local image;
the rotating the local image to obtain a rotated image, and vertically cropping the rotated image to obtain a corrected image, includes:
determining a first straight line and a second straight line according to the local image, wherein the first straight line is used for reflecting the left boundary of the detection device, and the second straight line is used for reflecting the right boundary of the detection device;
determining a rotation angle according to the first straight line and the second straight line, and rotating the local image according to the rotation angle to obtain a rotated image;
acquiring, in the rotated image, a first perpendicular line corresponding to the first straight line and a second perpendicular line corresponding to the second straight line;
vertically clipping the rotated image according to the first vertical line and the second vertical line to obtain the corrected image;
determining a first line and a second line according to the local image, including:
carrying out edge detection on the local image through an edge detection operator to obtain a plurality of edge points in the local image;
determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points;
determining a plurality of target edge points from the plurality of edge points, and determining a first straight line and a second straight line according to the plurality of target edge points, including:
carrying out Hough transform on the edge points to obtain a plurality of transform points which are in one-to-one correspondence with the edge points;
obtaining statistics values corresponding to the plurality of transformation points respectively, and performing non-maximum suppression on the plurality of transformation points according to the statistics values to obtain a plurality of target edge points;
the plurality of transformation points satisfy the following conditions:
the k value corresponding to the transformation point is between-pi/4 and pi/4;
the absolute value of the difference between the b values corresponding to the transformation points is greater than 1/3 of the width of the local image;
the absolute value of the difference between the b values corresponding to the transformation points is smaller than the absolute value of the difference between the b values corresponding to all other transformation points in the parameter space;
the intercepting a target image from the corrected image comprises:
acquiring parameter information of the detection device, and determining a scaled image corresponding to the corrected image according to the parameter information;
acquiring a standard detection device image, and performing template matching on the standard detection device image and the scaled image to obtain a target area in the scaled image;
intercepting the target area from the scaled image to obtain the target image;
the image recognition of the target image and the determination of the detection information include:
inputting the target image into a neural network model which is trained in advance to obtain a classification result corresponding to the target image;
determining the detection information according to the classification result;
the generating the two-dimensional code corresponding to the user according to the identity information and the detection information includes:
sending the identity information and the detection information to a background block chain for storage;
obtaining a hash value generated by the background block chain based on the identity information and the detection information;
inquiring a target block according to the hash value, and acquiring the detection information according to the target block;
generating the two-dimensional code according to the detection information;
the process of generating a new block by the background blockchain comprises the following steps:
creating an initial empty list for storing a block chain and an initial empty list for storing user self-test records;
when a user uploads identity information and detection information, adding a new record to an initial empty list for storing user self-test records, and returning an index of a next record, wherein each record corresponds to a block, and each block corresponds to a PoW workload certificate;
for the PoW proof of work corresponding to each block, performing a hash operation on the number of the proof and the number parsed from the previous block of the block yields a new hash value whose first N bits match a specific sequence.
3. A computer-readable storage medium having stored thereon a plurality of instructions, wherein the instructions are adapted to be loaded and executed by a processor to implement the steps of the machine-vision-based two-dimensional code generation method as claimed in claim 1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110800633.0A CN113610205B (en) | 2021-07-15 | 2021-07-15 | Two-dimensional code generation method and device based on machine vision and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113610205A CN113610205A (en) | 2021-11-05 |
CN113610205B true CN113610205B (en) | 2022-12-27 |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105455789A (en) * | 2014-09-09 | 2016-04-06 | 曲刚 | Unattended self-help health information collecting system and method based on network technique |
CN106156517A (en) * | 2016-07-22 | 2016-11-23 | 广东工业大学 | The self-service automatic checkout system in a kind of human body basic disease community |
CN106338596A (en) * | 2016-08-24 | 2017-01-18 | 四川长虹通信科技有限公司 | Health monitoring method, health monitoring apparatus, and electronic equipment |
CN206420781U (en) * | 2016-12-22 | 2017-08-18 | 中国移动通信有限公司研究院 | A kind of terminal, server and health detecting system |
CN107731307A (en) * | 2017-10-13 | 2018-02-23 | 安徽师范大学 | A kind of physical health self-measuring system |
CN109406506A (en) * | 2018-12-06 | 2019-03-01 | 北京腾康汇医科技有限公司 | A kind of shared self-rated health terminal and test method |
CN109870448A (en) * | 2019-02-09 | 2019-06-11 | 智锐达仪器科技南通有限公司 | A kind of colloidal gold test paper card detecting instrument and control method is detected accordingly |
CN110320358A (en) * | 2018-03-30 | 2019-10-11 | 深圳市贝沃德克生物技术研究院有限公司 | Diabetic nephropathy biomarker detection device and method |
CN111613333A (en) * | 2020-05-29 | 2020-09-01 | 惠州Tcl移动通信有限公司 | Self-service health detection method and device, storage medium and mobile terminal |
CN111899830A (en) * | 2020-08-06 | 2020-11-06 | 苏州贝福加智能系统有限公司 | Non-contact intelligent health detection system, detection method and detection device |
CN111916203A (en) * | 2020-06-18 | 2020-11-10 | 北京百度网讯科技有限公司 | Health detection method and device, electronic equipment and storage medium |
CN112890767A (en) * | 2020-12-30 | 2021-06-04 | 浙江大学 | Automatic detection device and method for health state of mouth, hands and feet |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008168811A (en) * | 2007-01-12 | 2008-07-24 | Honda Motor Co Ltd | Traffic lane recognition device, vehicle, traffic lane recognition method, and traffic lane recognition program |
US8805117B2 (en) * | 2011-07-19 | 2014-08-12 | Fuji Xerox Co., Ltd. | Methods for improving image search in large-scale databases |
CN108305261A (en) * | 2017-08-11 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Picture segmentation method, apparatus, storage medium and computer equipment |
CN108896547A (en) * | 2018-03-14 | 2018-11-27 | 浙江大学山东工业技术研究院 | Refractory brick measuring system based on machine vision |
CN109461482A (en) * | 2018-05-29 | 2019-03-12 | 平安医疗健康管理股份有限公司 | Health plan generation method, device, computer equipment and storage medium |
CN110954555A (en) * | 2019-12-26 | 2020-04-03 | 宋佳 | WDT 3D vision detection system |
CN112259238A (en) * | 2020-10-20 | 2021-01-22 | 平安科技(深圳)有限公司 | Electronic device, disease type detection method, apparatus, and medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |