WO2021258991A1 - Target contour delineation method and apparatus, computer system, and readable storage medium - Google Patents

Target contour delineation method and apparatus, computer system, and readable storage medium

Info

Publication number
WO2021258991A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
image
coordinates
key
target
Prior art date
Application number
PCT/CN2021/096666
Other languages
English (en)
French (fr)
Inventor
李成玲
夏明浩
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021258991A1 publication Critical patent/WO2021258991A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/25Determination of region of interest [ROI] or a volume of interest [VOI]

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a target contour delineation method and apparatus, a computer system, and a readable storage medium. It employs computer vision technology based on artificial intelligence.
  • Current face recognition systems usually perform computation on a received image and directly output a corresponding judgment result. However, the inventor realized that system developers cannot know which parts of the image the face recognition system has recognized, so the face recognition system becomes a kind of "black box": developers cannot know its working process, that is, which parts of the image the image recognition system has recognized, and therefore cannot make accurate adjustments to the face recognition system.
  • The purpose of this application is to provide a target contour delineation method and apparatus, a computer system, and a readable storage medium, which are used to solve the problem in the prior art that it is impossible to know which parts of an image an image recognition system has recognized; this application can be applied in smart government affairs scenarios to promote the construction of smart cities.
  • this application provides a target contour delineation method, which includes:
  • calling the image recognition system in the background interface to calculate the input parameter to obtain a recognition result, and sending the recognition result to the front-end device; wherein the recognition result includes contour coordinates that characterize the contour of the target area in the image;
  • a target contour delineation device which includes:
  • the image input module is used to receive the image to be recognized, and call the conversion tool of the front-end device to convert the image to be recognized into base64 encoding as the input parameter;
  • the input parameter calculation module is used to call the image recognition system in the background interface to calculate the input parameters to obtain the recognition result, and send the recognition result to the front-end device; wherein the recognition result includes the contour coordinates representing the contour of the target area in the image ;
  • the contour positioning module is used to call the contour conversion algorithm of the front-end device, calculate the contour coordinates to obtain contour positioning information;
  • the recognition drawing module is used to call the cascading style sheet of the front-end device and draw a recognition frame on the image to be recognized according to the contour positioning information.
  • The present application also provides a computer system, which includes a plurality of computer devices, each computer device including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processors of the plurality of computer devices execute the computer program, they jointly implement the above-mentioned target contour delineation method; the target contour delineation method includes:
  • calling the image recognition system in the background interface to calculate the input parameter to obtain a recognition result, and sending the recognition result to the front-end device; wherein the recognition result includes contour coordinates that characterize the contour of the target area in the image;
  • The present application also provides a computer-readable storage medium, which includes multiple storage media, each storing a computer program; when the computer programs stored in the multiple storage media are executed by processors, they jointly implement the above-mentioned target contour delineation method;
  • the target contour delineation method includes:
  • calling the image recognition system in the background interface to calculate the input parameter to obtain a recognition result, and sending the recognition result to the front-end device; wherein the recognition result includes contour coordinates that characterize the contour of the target area in the image;
  • The target contour delineation method, device, computer system, and readable storage medium provided in this application draw a recognition frame on the image to be recognized, based on the contour positioning information, through a cascading style sheet. Since the contour positioning information is generated from the contour coordinates in the recognition result, the area enclosed by the recognition frame drawn according to the contour positioning information can accurately reflect the recognition target of the image recognition system on the image to be recognized. This solves the current problem that system developers cannot know which parts of an image the image recognition system has recognized, provides developers with the working process of the image recognition system, and enables them to optimize and adjust the image recognition system according to that working process.
  • FIG. 1 is a flowchart of Embodiment 1 of a method for delineating a target contour of this application;
  • FIG. 2 is a schematic diagram of the environmental application of the target contour delineation method in the second embodiment of the target contour delineation method of the present application;
  • FIG. 3 is a specific method flow chart of the target contour delineation method in the second embodiment of the target contour delineation method of the present application;
  • FIG. 4 is a schematic diagram of program modules of Embodiment 3 of the target contour delineation device of the present application;
  • FIG. 5 is a schematic diagram of the hardware structure of the computer equipment in the fourth embodiment of the computer system of this application.
  • The target contour delineation method, device, computer system, and readable storage medium provided in this application are applicable to the field of artificial intelligence image detection technology, and provide a target contour delineation method based on an image input module, an input parameter calculation module, a contour positioning module, and a recognition drawing module.
  • The conversion tool of the front-end device is called to convert the image to be recognized into base64 encoding as an input parameter; the image recognition system in the background interface is called to calculate the input parameter to obtain the recognition result, and the recognition result is sent to the front-end device; the contour conversion algorithm of the front-end device is called to calculate the contour coordinates and obtain contour positioning information; and the cascading style sheet of the front-end device is called to draw a recognition frame on the image to be recognized according to the contour positioning information.
  • a method for delineating a target contour of this embodiment which is applied to an authentication server with an authentication program, includes:
  • S101 Receive an image to be recognized, and call a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
  • S102 Call the image recognition system in the background interface to calculate the input parameters to obtain a recognition result, and send the recognition result to the front-end device; wherein the recognition result includes contour coordinates that characterize the contour of the target area in the image;
  • S103 Call the contour conversion algorithm of the front-end device, and calculate the contour coordinates to obtain contour positioning information;
  • S105 Call the cascading style sheet of the front-end device and draw a recognition frame on the image to be recognized according to the contour positioning information.
  • The image to be recognized is converted into base64 encoding by calling the conversion tool of the front-end device, and the base64 encoding is used as the input parameter of the image recognition system in the background interface, so that the image recognition system can recognize the target area in the image corresponding to the encoding; the input parameter is calculated by calling the image recognition system in the background interface, and the recognition result used to characterize the recognition target is obtained, thereby achieving the technical effect of target recognition in the image;
  • the contour coordinates in the recognition result are calculated by the contour conversion algorithm into contour positioning information that can be recognized and processed by the cascading style sheet, so that the recognition target can be drawn in the form of contour lines on the image to be recognized on the front-end device; this solves the problem that the data output by the image recognition system cannot visually display and delineate the recognition target on the image to be recognized;
  • a recognition frame is drawn on the image to be recognized according to the contour positioning information through the cascading style sheet of the front-end device. Since the contour positioning information is generated according to the contour coordinates of the recognition result, the area delineated by the recognition frame drawn according to the contour positioning information can accurately reflect the recognition target of the image recognition system on the image to be recognized;
  • the current problem that system developers cannot know which parts of the image are recognized by the image recognition system is thus solved, and the working process of the image recognition system is provided to developers, so that they can optimize and adjust the image recognition system according to that working process.
  • This application can be applied to smart government affairs scenarios to promote the construction of smart cities.
  • This embodiment is a specific application scenario of the foregoing Embodiment 1. Through this embodiment, the method provided by this application can be described more clearly and specifically.
  • the recognition target in the image to be recognized is obtained and the recognition frame is drawn.
  • the method provided in this embodiment will be described in detail. It should be noted that this embodiment is only exemplary, and does not limit the protection scope of the embodiment of this application.
  • Fig. 2 schematically shows an environmental application diagram of the method for delineating a target contour according to the second embodiment of the present application.
  • The server 2 where the target contour delineation method runs is connected to a front-end device 3 and a background interface 4, respectively, through a network.
  • The server 2 may provide services through one or more networks, and the network may include various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like.
  • the network may include physical links, such as coaxial cable links, twisted pair cable links, optical fiber links, combinations thereof, and/or the like.
  • The network may include a wireless link, such as a cellular link, a satellite link, a Wi-Fi link, and/or the like; the front-end device 3 may be a computer device such as a smartphone, tablet computer, notebook computer, or desktop computer.
  • the background interface 4 may be an API interface provided by a server running an image recognition system.
  • FIG. 3 is a specific method flowchart of the target contour delineation method provided by an embodiment of the present application; the method specifically includes steps S201 to S208.
  • S201 Receive an image to be recognized, and call a conversion tool of a front-end device to convert the image to be recognized into base64 encoding as an input parameter.
  • This step converts the image to be recognized into base64 encoding by calling the conversion tool of the front-end device, and uses the base64 encoding as the input parameter of the image recognition system in the background interface, so that the image recognition system can recognize the target area in the image corresponding to the encoding, where the target area refers to the recognition target of the image recognition system, such as a human face, a specific object, and so on.
  • The input parameter is consistent with the definition of a variable, and the variable name can be omitted for parameters that are not used.
  • Base64 encoding is a scheme that transforms arbitrary byte-array data, through an algorithm, into string data represented by only 64 characters (uppercase and lowercase English letters, digits, +, and /); that is, it converts arbitrary content into a visible string form.
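As a minimal illustration of this step (using Node.js's built-in Buffer rather than the browser conversion tool the patent refers to, which is an assumption made so the sketch is self-contained), raw image bytes can be converted to a base64 string like this:

```javascript
// Minimal sketch: converting raw image bytes to a base64 string.
// In a browser front end this would typically use FileReader.readAsDataURL;
// here Node.js's Buffer keeps the example self-contained.
function toBase64(bytes) {
  return Buffer.from(bytes).toString('base64');
}

// Any byte sequence maps onto the 64-character alphabet (A-Z, a-z, 0-9, +, /).
const encoded = toBase64([0x4d, 0x61, 0x6e]); // bytes for "Man"
console.log(encoded); // → "TWFu"
```

The resulting string would then be sent as the input parameter to the background interface.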
  • S202 Call the image recognition system in the background interface to calculate the input parameters to obtain a recognition result, and send the recognition result to the front-end device; wherein the recognition result includes contour coordinates that characterize the contour of the target area in the image.
  • the recognition result further includes key coordinates representing key points in the target area.
  • The contour of the target area is the contour of the area where the recognition target of the image recognition system is located, for example, the area of a human face or of a specific object.
  • A key point is a landmark area or part of the recognition target. For example, if the recognition target is a human face, the key points can be the eyes, the nose tip, and the corners of the mouth.
  • Recognizing the target and its key points in an image through the image recognition system belongs to the prior art of image recognition systems.
  • the problem solved by this application is how to display the recognition result of the image recognition system on the image to be recognized. Therefore, the technical principle of the image recognition system to recognize the target to obtain the outline and key points of the target area will not be repeated in this application.
  • This step calculates the input parameter by calling the image recognition system in the background interface, and obtains the recognition result used to characterize the recognition target, thereby achieving the technical effect of target recognition in the image.
  • the contour of the target area is a rectangle
  • the contour coordinates include the coordinates of the top left vertex of the rectangle and the coordinates of the bottom right vertex of the rectangle.
  • The obtained recognition result may be as follows:
  • this part is the key point coordinates, where (X1, Y1), (X2, Y2), (X3, Y3), (X4, Y4), and (X5, Y5) are the key coordinates of the key points in the target area.
  • S203 Invoke the contour conversion algorithm of the front-end device, and calculate the contour coordinates to obtain contour positioning information.
  • This step calculates the contour coordinates in the recognition result with the contour conversion algorithm, turning them into contour positioning information that can be recognized and processed by the cascading style sheet.
  • the outline of the target area is a rectangle
  • The outline coordinates include the coordinates of the upper-left vertex of the rectangle and the coordinates of the lower-right vertex of the rectangle. The coordinates of the upper-left vertex are taken as the starting point coordinates (X01, Y01), and the coordinates of the lower-right vertex are taken as the end point coordinates (X02, Y02); the contour conversion algorithm is called to calculate the contour coordinates to obtain contour positioning information.
  • The contour positioning information includes the left margin (left), top margin (top), frame width (width), and frame height (height).
  • The left margin (left) refers to the distance from the left side of the ancestor element;
  • the top margin (top) refers to the distance from the top of the ancestor element;
  • the frame width (width) refers to the width of the outline of the target area (in this embodiment, the face frame);
  • the frame height (height) refers to the height of the outline of the target area (in this embodiment, the face frame).
  • The target formula in the contour conversion algorithm is as follows:
  • (X01, Y01) are the coordinates of the start point
  • (X02, Y02) are the coordinates of the end point.
  • the left margin is finally 248, the top margin is 66, the frame width is 126, and the frame height is 173.
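The conversion described above can be sketched as the following function. The formula (left = X01, top = Y01, width = X02 − X01, height = Y02 − Y01) and the end-point values (374, 239) are inferred from the worked numbers in the text rather than quoted from the specification:

```javascript
// Sketch of the contour conversion algorithm, inferred from the example
// values in the text (left 248, top 66, width 126, height 173).
// The start point is the rectangle's upper-left vertex, the end point its
// lower-right vertex; the end-point coordinates here are hypothetical.
function contourToPositioning(start, end) {
  return {
    left: start.x,            // distance from the left of the ancestor element
    top: start.y,             // distance from the top of the ancestor element
    width: end.x - start.x,   // frame width
    height: end.y - start.y,  // frame height
  };
}

const info = contourToPositioning({ x: 248, y: 66 }, { x: 374, y: 239 });
console.log(info); // → { left: 248, top: 66, width: 126, height: 173 }
```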
  • S204 Invoke the key conversion algorithm of the front-end device, and convert the key coordinates into key positioning information.
  • This step calculates the key coordinates in the recognition result with the key conversion algorithm, turning them into key positioning information that can be recognized and processed by the cascading style sheet, so that the landmark areas or parts of the recognition target can be marked on the image to be recognized on the front-end device by marking the key points.
  • The key coordinates are the coordinates of the upper-left vertex pixel of the key point, and the key conversion algorithm is called to calculate the key coordinates to obtain the key positioning information.
  • The key positioning information includes the left margin (left), top margin (top), frame width (width), and frame height (height).
  • The left margin (left) refers to the distance from the left side of the ancestor element;
  • the top margin (top) refers to the distance from the top of the ancestor element;
  • the frame width (width) refers to the width of the key point mark;
  • the frame height (height) refers to the height of the key point mark.
  • x is the abscissa of the key point
  • y is the ordinate of the key point
  • a and b are preset constants, respectively.
  • In this embodiment, a and b are each 4px.
  • the key positioning information of the key point with coordinates (x1, y1) is that the left margin is 278, the top margin is 138, the frame width is 4, and the frame height is 4;
  • the key positioning information of the key point with coordinates (x2, y2) is that the left margin is 335, the top margin is 136, the frame width is 4, and the frame height is 4;
  • the key positioning information of the key point with coordinates (x3, y3) is that the left margin is 303, the top margin is 173, the frame width is 4, and the frame height is 4;
  • the key positioning information of the key point with coordinates (x4, y4) is that the left margin is 281, the top margin is 193, the frame width is 4, and the frame height is 4;
  • the key positioning information of the key point with coordinates (x5, y5) is that the left margin is 337, the top margin is 191, the frame width is 4, and the frame height is 4.
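The key conversion can be sketched as follows. Mapping left = x and top = y is an assumption consistent with the statement that the key coordinates are the upper-left vertex of the key point; the constants a and b are the 4px values given in the text:

```javascript
// Sketch of the key conversion algorithm. The text states that the key
// coordinates are the upper-left vertex pixel of the key point and that
// a and b are preset constants of 4px each; left = x and top = y is an
// assumption consistent with that statement.
const A = 4; // preset frame width constant (px)
const B = 4; // preset frame height constant (px)

function keyToPositioning(x, y) {
  return { left: x, top: y, width: A, height: B };
}

console.log(keyToPositioning(278, 138));
// → { left: 278, top: 138, width: 4, height: 4 }
```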
  • S205 Call the cascading style sheet of the front-end device and draw a recognition frame on the image to be recognized according to the contour positioning information.
  • This step uses the cascading style sheet of the front-end device to draw a recognition frame on the image to be recognized based on the contour positioning information. Since the contour positioning information is generated from the contour coordinates of the recognition result, the area enclosed by the recognition frame drawn according to the contour positioning information can accurately reflect the recognition target of the image recognition system on the image to be recognized.
  • The cascading style sheet is called to draw the corresponding recognition frame on the image to be recognized according to the contour positioning information (including: a left margin of 248, a top margin of 66, a frame width of 126, and a frame height of 173).
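The drawing step can be sketched as a helper that turns the positioning information into an absolute-positioning CSS declaration applied to an overlay element placed over the image. The border style and the overlay-element idea are illustrative assumptions, not details taken from the patent:

```javascript
// Sketch: turn contour positioning information into an absolute-positioning
// CSS declaration. The border style is an illustrative assumption.
function frameStyle({ left, top, width, height }) {
  return `position:absolute;left:${left}px;top:${top}px;` +
         `width:${width}px;height:${height}px;border:2px solid red;`;
}

// In a browser this could be applied to an overlay element, e.g.:
//   overlayDiv.style.cssText = frameStyle(info);
console.log(frameStyle({ left: 248, top: 66, width: 126, height: 173 }));
```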
  • S206 Call the cascading style sheet of the front-end device and mark key points on the image to be recognized according to the key positioning information.
  • This step uses the cascading style sheet of the front-end device to mark the key points on the image to be recognized based on the key positioning information. Since the key positioning information is generated according to the key coordinates of the recognition result, the marks drawn according to the key positioning information can accurately reflect the key points recognized by the image recognition system on the image to be recognized.
  • According to the key positioning information of the key points whose coordinates are (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5), color is filled in on the image to be recognized to mark the corresponding key points.
  • Normally, the recognition target in the image should be located through "coordinates", but coordinates depend on a reference system (different units, such as cm versus pixels; or different reference points — for example, the lower-left reference point of the front-end image may be (10, 50) while the lower-left reference point of the background-interface image is (0, 0), so the same coordinates denote different positions).
  • This application obtains the "absolute positioning" information of the recognition target through the contour conversion algorithm and the key conversion algorithm, that is, the contour positioning information and the key positioning information that characterize the absolute positioning of the recognition frame and the key points. Based on this "absolute positioning" information, even if the background interface and the front-end device use different reference systems, the technical effect of correctly delineating the contour of the recognition target on the image to be recognized is ensured.
  • S207 Call the drawing algorithm of the front-end device to cut the image enclosed by the recognition frame to generate a target image.
  • This step cuts out the image enclosed by the recognition frame by calling the drawing algorithm and generates the target image, so that the user only needs to observe the resulting target image; it also helps the developer of the image recognition system intuitively judge whether the recognition accuracy of the image recognition system meets the accuracy requirements.
  • Converting the contour positioning information into cutting information includes: setting the left margin of the contour positioning information as sx, that is, the x coordinate position where cutting starts.
  • The preset parameter x is obtained, that is, the x coordinate position where the image is placed on the canvas; in this embodiment, x can be set to 0.
  • The preset parameter y is obtained, that is, the y coordinate position where the image is placed on the canvas; y can be set to 0.
  • The preset parameter width is obtained, that is, the width of the image to be used; the parameter width can be set to 90px.
  • The preset parameter height is obtained, that is, the height of the image to be used; the parameter height can be set to 90px.
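These parameters read like the nine-argument form of the HTML canvas drawImage call (source rectangle sx, sy, sWidth, sHeight plus destination x, y, width, height). A sketch that assembles those arguments from the contour positioning information follows; only sx is explicitly tied to the left margin in the text, so mapping top/width/height to sy/sWidth/sHeight is an assumption extrapolated by analogy:

```javascript
// Sketch: map contour positioning information to the argument list of
// CanvasRenderingContext2D.drawImage(image, sx, sy, sWidth, sHeight,
// dx, dy, dWidth, dHeight). Only sx (= left margin) is stated in the text;
// sy, sWidth, and sHeight are assumed analogously.
function cuttingArgs(info, dest = { x: 0, y: 0, width: 90, height: 90 }) {
  return [
    info.left,  info.top,      // sx, sy: where cutting starts
    info.width, info.height,   // sWidth, sHeight: size of the cut region
    dest.x,     dest.y,        // dx, dy: where the image is placed on the canvas
    dest.width, dest.height,   // dWidth, dHeight: size of the placed image
  ];
}

// In a browser: ctx.drawImage(img, ...cuttingArgs(info));
console.log(cuttingArgs({ left: 248, top: 66, width: 126, height: 173 }));
// → [248, 66, 126, 173, 0, 0, 90, 90]
```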
  • the method further includes:
  • the corresponding summary information is obtained based on the target image.
  • The summary information is obtained by hashing the target image, for example, with the SHA-256 algorithm.
  • Uploading summary information to the blockchain can ensure its security and fairness and transparency to users.
  • the user equipment can download the summary information from the blockchain to verify whether the target image has been tampered with.
  • the blockchain referred to in this example is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • Blockchain is essentially a decentralized database: a chain of data blocks associated by cryptographic methods, where each data block contains a batch of network transaction information used to verify the validity of its information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • The image to be recognized, drawn with the recognition frame and/or marked with key points, is displayed on the same display interface as the target image and the recognition result, so that the user can observe them at the same time.
  • a target contour delineation device 1 of this embodiment includes:
  • the image input module 11 is used to receive the image to be recognized, and call the conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
  • the input parameter calculation module 12 is used to call the image recognition system in the background interface to calculate the input parameters to obtain the recognition result, and send the recognition result to the front-end device; wherein, the recognition result includes a contour characterizing the contour of the target area in the image coordinate;
  • the contour positioning module 13 is used to call the contour conversion algorithm of the front-end device, calculate the contour coordinates to obtain contour positioning information;
  • the recognition drawing module 15 is used to call the cascading style sheet of the front-end device and draw a recognition frame on the image to be recognized according to the contour positioning information.
  • The target contour delineation device 1 further includes:
  • the key positioning module 14, which is used to call the key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
  • The target contour delineation device 1 further includes:
  • the recognition and labeling module 16, which is used to call the cascading style sheet of the front-end device and label the key points on the image to be recognized according to the key positioning information.
  • The target contour delineation device 1 further includes:
  • the cutting module 17, which is used to call the drawing algorithm of the front-end device to cut the image enclosed by the recognition frame to generate the target image.
  • The target contour delineation device 1 further includes:
  • the output module 18, which is used to display the recognition result and the target image on the front-end device.
  • This technical solution is applied to the field of artificial intelligence image detection: the conversion tool of the front-end device is called to convert the image to be recognized into base64 encoding as the input parameter; the image recognition system in the background interface is called to calculate the input parameter to obtain the recognition result, and the recognition result is sent to the front-end device; the contour conversion algorithm of the front-end device is called to calculate the contour coordinates to obtain the contour positioning information; and the cascading style sheet of the front-end device is called to draw the recognition frame on the image to be recognized according to the contour positioning information, thereby realizing edge detection of the recognition target in the image to be recognized by the artificial-intelligence-based image recognition system and achieving the technical effect of image processing.
  • the present application also provides a computer system, which includes a plurality of computer devices 5.
  • The components of the target contour delineation device 1 in the third embodiment can be dispersed across different computer devices; each computer device may be a program-executing smartphone, tablet computer, laptop, desktop computer, rack server, blade server, tower server, or cabinet server (including an independent server, or a server cluster composed of multiple application servers), and so on.
  • the computer equipment in this embodiment at least includes but is not limited to: a memory 51 and a processor 52 that can be communicatively connected to each other through a system bus, as shown in FIG. 5. It should be pointed out that FIG. 5 only shows a computer device with components, but it should be understood that it is not required to implement all the illustrated components, and more or fewer components may be implemented instead.
  • the memory 51 (ie, readable storage medium) includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory (RAM), static random access memory (SRAM), Read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
  • the memory 51 may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device.
  • the memory 51 may also be an external storage device of the computer device, such as a plug-in hard disk equipped on the computer device, a smart memory card (Smart Media Card, SMC), Secure Digital (SD) card, Flash Card, etc.
  • the memory 51 may also include both the internal storage unit of the computer device and its external storage device.
  • The memory 51 is generally used to store the operating system and various application software installed in the computer device, such as the program code of the target contour delineation device of the third embodiment, and so on.
  • the memory 51 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 52 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 52 is generally used to control the overall operation of the computer equipment.
  • The processor 52 is used to run the program code or process the data stored in the memory 51, for example, to run the target contour delineation device, so as to implement the target contour delineation methods of the first and second embodiments.
  • the present application also provides a computer-readable storage medium, which includes multiple storage media.
  • The computer-readable storage medium may be non-volatile or volatile, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, servers, application stores, etc., on which computer programs are stored; the corresponding functions are realized when the programs are executed by the processor 52.
  • the computer-readable storage medium of this embodiment is used to store the target contour demarcation device, and when executed by the processor 52, the target contour demarcation method of the first embodiment and the second embodiment is realized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

A target contour delineation method, apparatus, computer system, and readable storage medium. The method comprises: receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image into base64 encoding as an input parameter (S101); invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device (S102); invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information (S103); and invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information (S105). The method solves the existing problem that system developers cannot know which parts of an image an image recognition system has recognized; it exposes the working process of the image recognition system so that developers can optimize and adjust the system accordingly.

Description

Target contour delineation method, apparatus, computer system, and readable storage medium
This application claims priority to Chinese patent application No. CN 202010587639.X, entitled "Target contour delineation method, apparatus, computer system, and readable storage medium" and filed on June 24, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of artificial intelligence, and in particular to a target contour delineation method, apparatus, computer system, and readable storage medium, making use of AI-based computer vision technology.
Background
Current face recognition systems typically perform computations on a received image and directly output the corresponding judgment result. The inventor realized, however, that system developers cannot know which parts of the image the face recognition system has recognized. The face recognition system thus becomes a kind of "black box": developers cannot observe its working process, that is, they cannot know which parts of the image the image recognition system has recognized, and therefore cannot adjust the face recognition system accurately.
Summary
The purpose of this application is to provide a target contour delineation method, apparatus, computer system, and readable storage medium to solve the prior-art problem that one cannot know which parts of an image an image recognition system has recognized. This application can be applied in smart government scenarios, thereby promoting the construction of smart cities.
To achieve the above objective, this application provides a target contour delineation method, comprising:
receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
To achieve the above objective, this application further provides a target contour delineation apparatus, comprising:
an image input module, configured to receive an image to be recognized and invoke a conversion tool of the front-end device to convert the image into base64 encoding as an input parameter;
an input computation module, configured to invoke the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and to send the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
a contour positioning module, configured to invoke a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
a recognition drawing module, configured to invoke the cascading style sheet of the front-end device to draw a recognition box on the image according to the contour positioning information.
To achieve the above objective, this application further provides a computer system comprising multiple computer devices, each computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processors of the multiple computer devices, when executing the computer programs, jointly implement the above target contour delineation method; the target contour delineation method comprises:
receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
To achieve the above objective, this application further provides a computer-readable storage medium comprising multiple storage media, each storing a computer program, wherein the computer programs stored on the multiple storage media, when executed by a processor, jointly implement the above target contour delineation method;
the target contour delineation method comprises:
receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
In the target contour delineation method, apparatus, computer system, and readable storage medium provided by this application, the cascading style sheet draws a recognition box on the image to be recognized according to the contour positioning information. Since the contour positioning information is generated from the contour coordinates of the recognition result, the region enclosed by the recognition box accurately reflects the recognition target of the image recognition system on the image to be recognized. This solves the existing problem that system developers cannot know which parts of an image the image recognition system has recognized, exposes the working process of the image recognition system, and enables developers to optimize and adjust the image recognition system accordingly.
Brief Description of the Drawings
FIG. 1 is a flowchart of Embodiment One of the target contour delineation method of this application;
FIG. 2 is a schematic diagram of the application environment of the target contour delineation method in Embodiment Two of this application;
FIG. 3 is a detailed flowchart of the target contour delineation method in Embodiment Two of this application;
FIG. 4 is a schematic diagram of the program modules of Embodiment Three of the target contour delineation apparatus of this application;
FIG. 5 is a schematic diagram of the hardware structure of a computer device in Embodiment Four of the computer system of this application.
Detailed Description
To make the objectives, technical solutions, and advantages of this application clearer, this application is further described in detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and are not intended to limit it. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative effort fall within the scope of protection of this application.
The target contour delineation method, apparatus, computer system, and readable storage medium provided by this application are applicable to the AI image detection field, and provide a target contour delineation method based on an image input module, an input computation module, a contour positioning module, and a recognition drawing module. This application invokes a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter; invokes the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sends the recognition result to the front-end device; invokes a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and invokes the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
Embodiment One:
Referring to FIG. 1, the target contour delineation method of this embodiment, applied in an authentication server running an authentication program, comprises:
S101: receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
S102: invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
S103: invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information;
S105: invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
In an exemplary embodiment, the conversion tool of the front-end device converts the image to be recognized into base64 encoding, and the base64 encoding serves as the input parameter of the image recognition system in the back-end interface, so that the image recognition system can recognize the target region in the image corresponding to that encoding; by invoking the image recognition system in the back-end interface to compute the input parameter, a recognition result characterizing the recognition target is obtained, achieving the technical effect of recognizing the target in the image;
the contour conversion algorithm computes the contour coordinates in the recognition result, turning them into contour positioning information that the cascading style sheet can recognize and process, so that the recognition target can be delineated by drawing contour lines on the image displayed on the front-end device; this solves the problem that the data output by the image recognition system cannot intuitively display and delineate the recognition target on the image to be recognized;
in this step, the cascading style sheet of the front-end device draws a recognition box on the image to be recognized according to the contour positioning information; since the contour positioning information is generated from the contour coordinates of the recognition result, the region enclosed by the recognition box accurately reflects the recognition target of the image recognition system on the image to be recognized;
this solves the existing problem that system developers cannot know which parts of an image the image recognition system has recognized, exposes the working process of the image recognition system, and enables developers to optimize and adjust the image recognition system according to that working process.
This application can be applied in smart government scenarios, thereby promoting the construction of smart cities.
Embodiment Two:
This embodiment is a specific application scenario of Embodiment One above; through this embodiment, the method provided by this application can be set forth more clearly and concretely.
The method provided by this embodiment is described below by taking as an example a server running the target contour delineation method that obtains the recognition target in an image to be recognized and draws a recognition box. It should be noted that this embodiment is only exemplary and does not limit the scope protected by the embodiments of this application.
FIG. 2 schematically shows the application environment of the target contour delineation method according to Embodiment Two of this application.
In an exemplary embodiment, the server 2 on which the target contour delineation method runs is connected via networks to the front-end device 3 and the back-end interface 4 respectively. The server 2 may provide services through one or more networks, and a network may include various network devices, such as routers, switches, multiplexers, hubs, modems, bridges, repeaters, firewalls, proxy devices, and/or the like. A network may include physical links, such as coaxial cable links, twisted-pair cable links, fiber-optic links, combinations thereof, and/or the like, and may include wireless links, such as cellular links, satellite links, Wi-Fi links, and/or the like. The front-end device 3 may be a computer device such as a smartphone, tablet, laptop, or desktop computer, and the back-end interface 4 may be an API provided by a server running an image recognition system.
FIG. 3 is a detailed flowchart of a target contour delineation method provided by an embodiment of this application; the method specifically comprises steps S201 to S208.
S201: receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter.
To make it easy for the back-end interface running the image recognition system to recognize the target region in the image, this step invokes the conversion tool of the front-end device to convert the image to be recognized into base64 encoding, and uses the base64 encoding as the input parameter of the image recognition system in the back-end interface, so that the image recognition system can recognize the target region in the image corresponding to that encoding. The target region refers to the recognition target of the image recognition system, for example a human face or a specific object.
It should be noted that an input parameter is defined in the same way as a variable; the variable name of an unused parameter may be omitted. Base64 encoding is an algorithm that can turn an arbitrary byte array into a string consisting only of 64 characters (upper- and lower-case letters, digits, "+", and "/"), that is, it converts arbitrary content into a visible string form.
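As a concrete illustration of the encoding step, the sketch below shows how raw image bytes can be turned into a base64 string. Node's `Buffer` is used here so the conversion runs standalone; a browser front end would typically use `FileReader.readAsDataURL` instead, and the exact conversion tool is not specified by this application.

```typescript
// Minimal sketch (assumed helper name): convert raw image bytes to a
// base64 string suitable as the input parameter of the back-end interface.
function toBase64(bytes: Uint8Array): string {
  return Buffer.from(bytes).toString("base64");
}

// Base64 output uses only A-Z, a-z, 0-9, "+", "/" (plus "=" padding),
// so arbitrary binary content becomes a printable string.
const sample = new Uint8Array([0x89, 0x50, 0x4e, 0x47]); // PNG magic prefix
const encoded = toBase64(sample); // "iVBORw=="
```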
S202: invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image.
Optionally, the recognition result also includes key coordinates characterizing the key points in the target region.
In this step, the target region contour is the contour of the region where the recognition target of the image recognition system is located, for example the region of a human face or of a specific object. A key point is a region or part characterizing a distinctive feature of the recognition target; for example, if the recognition target is a human face, the key points may be the eyes, the tip of the nose, and the corners of the mouth.
It should be noted that recognizing a target and its key points in an image is existing image recognition technology; the problem solved by this application is how to display the recognition result of the image recognition system on the image to be recognized. Therefore, the technical principles by which the image recognition system recognizes a target and obtains the target region contour and key points are not elaborated here.
To obtain the recognition target of the image to be recognized, this step invokes the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result characterizing the recognition target, achieving the technical effect of recognizing the target in the image.
In this embodiment, the target region contour is a rectangle, and the contour coordinates include the coordinates of the top-left vertex of the rectangle and the coordinates of its bottom-right vertex.
For example, the obtained recognition result may be as follows:
{
  "isEncrypted": "0",
  "data": [
    {
      "bboxes": {
        "y01": 66,
        "y02": 239,
        "x01": 248,
        "x02": 374
      },/ These are the contour coordinates: X01 and Y01 are the coordinates of the top-left vertex of the target region contour, and X02 and Y02 are the coordinates of its bottom-right vertex /
      "points": {
        "y1": 138,
        "x1": 278,
        "y2": 136,
        "y3": 173,
        "x2": 335,
        "x3": 303,
        "y4": 193,
        "x4": 281,
        "y5": 191,
        "x5": 337
      }/ These are the key-point coordinates: (X1,Y1), (X2,Y2), (X3,Y3), (X4,Y4), and (X5,Y5) are the key coordinates of the key points in the target region /
    }
  ],
  "code": "000000",
  "msg": "success"
}
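The response above can be consumed on the front end roughly as follows. The `RecognitionResult` interface mirrors the sample fields (`bboxes`, `points`, `code`, `msg`), but the interface itself and the `extractBbox` helper are illustrative assumptions, not a published API.

```typescript
// Hypothetical shape of the recognition result shown above.
interface RecognitionResult {
  isEncrypted: string;
  data: { bboxes: Record<string, number>; points: Record<string, number> }[];
  code: string;
  msg: string;
}

// Pull the contour coordinates (top-left and bottom-right vertices)
// out of the raw JSON string returned by the back-end interface.
function extractBbox(raw: string): { x01: number; y01: number; x02: number; y02: number } {
  const result: RecognitionResult = JSON.parse(raw);
  const { x01, y01, x02, y02 } = result.data[0].bboxes;
  return { x01, y01, x02, y02 };
}
```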
S203: invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information.
Since the data output by the image recognition system of the back-end interface cannot intuitively display and delineate the recognition target on the image to be recognized, this step uses the contour conversion algorithm to compute the contour coordinates in the recognition result, turning them into contour positioning information that the cascading style sheet can recognize and process, so that the recognition target can be delineated by drawing contour lines on the image displayed on the front-end device.
In an exemplary embodiment, the target region contour is a rectangle, and the contour coordinates include the coordinates of the top-left vertex of the rectangle and the coordinates of its bottom-right vertex; the top-left vertex is taken as the start coordinates (X01, Y01) and the bottom-right vertex as the end coordinates (X02, Y02). The contour conversion algorithm is invoked to compute the contour coordinates and obtain the contour positioning information, which includes a left margin left, a top margin top, a box width width, and a box height height.
The left margin left is the distance from the left edge of the ancestor element, the top margin top is the distance from the top of the ancestor element, the box width width is the width of the target region contour (in this embodiment, the face box), and the box height height is the height of the target region contour (in this embodiment, the face box).
Specifically, the target formulas in the contour conversion algorithm are invoked as follows:
left = x01;  top = y01;  width = x02 - x01;  height = y02 - y01
where (X01, Y01) are the start coordinates and (X02, Y02) are the end coordinates.
Based on the example above, the resulting left margin left is 248, the top margin top is 66, the box width width is 126, and the box height height is 173.
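The target formulas above can be sketched directly as a small conversion function (the `ContourBox` type and the function name are illustrative, not part of the application):

```typescript
interface ContourBox { left: number; top: number; width: number; height: number }

// left = x01, top = y01, width = x02 - x01, height = y02 - y01
function contourToBox(x01: number, y01: number, x02: number, y02: number): ContourBox {
  return { left: x01, top: y01, width: x02 - x01, height: y02 - y01 };
}

// With the example coordinates (248, 66) and (374, 239):
const box = contourToBox(248, 66, 374, 239); // { left: 248, top: 66, width: 126, height: 173 }
```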
S204: invoking a key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
Since the data output by the image recognition system of the back-end interface cannot intuitively display the recognized key points on the image to be recognized, this step uses the key conversion algorithm to compute the key coordinates in the recognition result, turning them into key positioning information that the cascading style sheet can recognize and process, so that the regions or parts bearing the distinctive features of the recognition target can be marked as key points on the image displayed on the front-end device.
In an exemplary embodiment, the key coordinates are the coordinates of the top-left pixel of a key point; the key conversion algorithm is invoked to compute the key coordinates and obtain the key positioning information, which includes a left margin left, a top margin top, a box width width, and a box height height. The left margin left is the distance from the left edge of the ancestor element, the top margin top is the distance from the top of the ancestor element, and the box width width and box height height are the width and height of the key-point marker.
Specifically, the target formulas in the key conversion algorithm are invoked as follows:
left = x;  top = y;  width = A;  height = B
where x is the abscissa of the key point, y is its ordinate, and A and B are preset constants; in this embodiment, A and B are each 4px.
Based on the example above, the following key positioning information is obtained:
for the key point at (x1, y1): left margin left 278, top margin top 138, box width width 4, box height height 4;
for the key point at (x2, y2): left margin left 335, top margin top 136, box width width 4, box height height 4;
for the key point at (x3, y3): left margin left 303, top margin top 173, box width width 4, box height height 4;
for the key point at (x4, y4): left margin left 281, top margin top 193, box width width 4, box height height 4;
for the key point at (x5, y5): left margin left 337, top margin top 191, box width width 4, box height height 4.
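The key conversion formula above, with the preset constants A = B = 4px, can likewise be sketched as a one-line helper (the name and default parameter are illustrative):

```typescript
// left = x, top = y, width = A, height = B, with A = B = 4px by default.
function keyPointToBox(x: number, y: number, size = 4) {
  return { left: x, top: y, width: size, height: size };
}

// For the key point at (278, 138):
const marker = keyPointToBox(278, 138); // { left: 278, top: 138, width: 4, height: 4 }
```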
S205: invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
To enable the front-end device to display the recognition target of the image recognition system on the image to be recognized, this step uses the cascading style sheet of the front-end device to draw a recognition box on the image according to the contour positioning information. Since the contour positioning information is generated from the contour coordinates of the recognition result, the region enclosed by the recognition box accurately reflects the recognition target of the image recognition system on the image to be recognized.
For example, the cascading style sheet is invoked to draw the corresponding recognition box on the image to be recognized according to the contour positioning information (left margin left 248, top margin top 66, box width width 126, box height height 173).
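One plausible way for a cascading style sheet to draw the box is to generate an inline declaration for an absolutely positioned overlay element. The sketch below assembles such a declaration from the contour positioning information; the border style and the overlay-element markup are assumptions, since the application only specifies that CSS draws the box.

```typescript
// Build an inline CSS declaration for an absolutely positioned overlay
// <div> that circumscribes the recognition target on the displayed image.
function boxToCss(info: { left: number; top: number; width: number; height: number }): string {
  return [
    "position:absolute",
    `left:${info.left}px`,
    `top:${info.top}px`,
    `width:${info.width}px`,
    `height:${info.height}px`,
    "border:2px solid red", // assumed styling for visibility
  ].join(";");
}

const css = boxToCss({ left: 248, top: 66, width: 126, height: 173 });
```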
S206: invoking the cascading style sheet of the front-end device to mark the key points on the image to be recognized according to the key positioning information.
To enable the front-end device to display the key points that the image recognition system has recognized within the recognition target, this step uses the cascading style sheet of the front-end device to mark the key points on the image according to the key positioning information. Since the key positioning information is generated from the key coordinates of the recognition result, the regions marked according to it accurately reflect the key points recognized by the image recognition system on the image to be recognized.
For example, the cascading style sheet is invoked to fill color on the image to be recognized according to the key positioning information (in the example above, the key positioning information for the points at (x1, y1), (x2, y2), (x3, y3), (x4, y4), and (x5, y5)), thereby marking the corresponding key points.
It should be noted that although, as a matter of common sense, "coordinates" should suffice to locate the recognition target in the image, coordinates depend on a reference frame: the units may differ (e.g., centimeters vs. pixels), or the origin may differ (e.g., the bottom-left origin of the front-end image at (10, 50) versus the bottom-left origin of the back-end interface image at (0, 0)). Under different reference frames, coordinates alone cannot delineate the correct marker box on the front end. This application therefore uses the contour conversion algorithm and the key conversion algorithm to obtain "absolute positioning" information for the recognition target, i.e., contour positioning information and key positioning information characterizing the absolute positions of the recognition box and the key points. By applying the cascading style sheet with this "absolute positioning" information, the contour of the recognition target can still be delineated correctly on the image to be recognized even when the back-end interface and the front-end device use different reference frames.
S207: invoking a drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate a target image.
To make it easy for the user to observe the recognition target of the image recognition system, this step invokes the drawing algorithm to crop the image enclosed by the recognition box and generate a target image, so that the user only needs to look at the obtained target image; it also helps developers of the image recognition system judge intuitively whether its recognition accuracy meets the precision requirements.
For example, the contour positioning information is converted into cropping information as follows: the left margin left of the contour positioning information is set as sx, i.e., the x coordinate at which to start cropping; the top margin top is set as sy, i.e., the y coordinate at which to start cropping; the box width width is set as swidth, i.e., the width of the cropped image; and the box height height is set as sheight, i.e., the height of the cropped image. A preset parameter x is obtained, i.e., the x coordinate at which to place the image on the canvas; in this embodiment, x may be set to 0. A preset parameter y is obtained, i.e., the y coordinate at which to place the image on the canvas; in this embodiment, y may be set to 0. A preset parameter width is obtained, i.e., the width of the image to be used; in this embodiment, it may be set to 90px. A preset parameter height is obtained, i.e., the height of the image to be used; in this embodiment, it may be set to 90px.
The HTML5 canvas drawImage() method is used as the drawing method; the cropping information is entered into it to obtain context.drawImage(img, sx, sy, swidth, sheight, x, y, width, height), and the drawing method is executed to obtain the corresponding target image.
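Assembling the nine-argument `drawImage()` call can be sketched as follows. `ctx.drawImage(img, sx, sy, swidth, sheight, x, y, width, height)` is the real HTML5 canvas signature; the helper below only builds the eight numeric arguments (everything after `img`), using the preset `x = y = 0` and 90px output size from the example, and the helper name itself is an assumption.

```typescript
// Map contour positioning info onto the drawImage() crop arguments:
// [sx, sy, swidth, sheight, x, y, width, height].
function cropArgs(
  info: { left: number; top: number; width: number; height: number },
  outW = 90,
  outH = 90,
): number[] {
  const { left: sx, top: sy, width: sw, height: sh } = info;
  return [sx, sy, sw, sh, 0, 0, outW, outH];
}

// In a browser one would then call:
//   ctx.drawImage(img, ...cropArgs(info));
const args = cropArgs({ left: 248, top: 66, width: 126, height: 173 });
```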
After invoking the drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate the target image, the method further comprises:
uploading the target image to a blockchain.
It should be noted that the corresponding digest information is obtained from the target image; specifically, the digest information is obtained by hashing the target image, for example with the sha256 algorithm. Uploading the digest information to the blockchain ensures its security and its fairness and transparency to users. A user device can download the digest information from the blockchain to verify whether the target image has been tampered with. The blockchain referred to in this example is a new application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods, each data block containing a batch of network transaction information used to verify the validity of the information (anti-counterfeiting) and to generate the next block. A blockchain may include an underlying blockchain platform, a platform product service layer, an application service layer, and so on.
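The digest step mentioned above can be sketched with Node's built-in `crypto` module; the blockchain upload itself is omitted, and the `imageDigest` name is illustrative.

```typescript
import { createHash } from "crypto";

// Compute the sha256 digest of the target image bytes; it is this digest,
// not the image itself, that would be anchored on-chain for tamper checking.
function imageDigest(imageBytes: Uint8Array): string {
  return createHash("sha256").update(imageBytes).digest("hex");
}
```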
S208: displaying the recognition result and the target image on the front-end device.
Preferably, the image to be recognized, on which the recognition box is drawn and/or the key points are marked, is displayed on the same display interface as the target image and the recognition result, so that the user can observe them all at the same time.
Embodiment Three:
Referring to FIG. 4, a target contour delineation apparatus 1 of this embodiment comprises:
an image input module 11, configured to receive an image to be recognized and invoke a conversion tool of the front-end device to convert the image into base64 encoding as an input parameter;
an input computation module 12, configured to invoke the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and to send the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
a contour positioning module 13, configured to invoke a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information;
a recognition drawing module 15, configured to invoke the cascading style sheet of the front-end device to draw a recognition box on the image according to the contour positioning information.
Optionally, the target contour delineation apparatus 1 further comprises:
a key positioning module 14, configured to invoke a key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
Optionally, the target contour delineation apparatus 1 further comprises:
a recognition marking module 16, configured to invoke the cascading style sheet of the front-end device to mark the key points on the image according to the key positioning information.
Optionally, the target contour delineation apparatus 1 further comprises:
a cropping module 17, configured to invoke a drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate a target image.
Optionally, the target contour delineation apparatus 1 further comprises:
an output module 18, configured to display the recognition result and the target image on the front-end device.
This technical solution is applied to the AI image detection field. By invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter; invoking the image recognition system in the back-end interface to compute the input parameter, obtain a recognition result, and send it to the front-end device; invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and invoking the cascading style sheet of the front-end device to draw a recognition box on the image according to the contour positioning information, edge detection of the recognition target in the image to be recognized is realized based on an AI image recognition system, achieving the technical effect of image processing.
Embodiment Four:
To achieve the above objective, this application further provides a computer system comprising multiple computer devices 5; the components of the target contour delineation apparatus 1 of Embodiment Three may be distributed across different computer devices. A computer device may be a smartphone, tablet, laptop, desktop computer, rack server, blade server, tower server, or cabinet server (including an independent server or a server cluster composed of multiple application servers) that executes programs. The computer device of this embodiment at least includes, but is not limited to, a memory 51 and a processor 52 that can be communicatively connected to each other through a system bus, as shown in FIG. 5. It should be pointed out that FIG. 5 shows only a computer device with some of its components; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead.
In this embodiment, the memory 51 (i.e., a readable storage medium) includes flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, and so on. In some embodiments, the memory 51 may be an internal storage unit of the computer device, such as its hard disk or internal memory. In other embodiments, the memory 51 may also be an external storage device of the computer device, such as a plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, or flash card equipped on the computer device. Of course, the memory 51 may also include both the internal storage unit of the computer device and its external storage device. In this embodiment, the memory 51 is generally used to store the operating system and the various application software installed on the computer device, such as the program code of the target contour delineation apparatus of Embodiment Three. In addition, the memory 51 can also be used to temporarily store various types of data that have been output or will be output.
In some embodiments, the processor 52 may be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 52 is generally used to control the overall operation of the computer device. In this embodiment, the processor 52 is used to run the program code or process the data stored in the memory 51, for example to run the target contour delineation apparatus, so as to implement the target contour delineation methods of Embodiments One and Two.
Embodiment Five:
To achieve the above objective, this application further provides a computer-readable storage medium comprising multiple storage media. The computer-readable storage medium may be non-volatile or volatile, such as flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, a server, or an application store; a computer program is stored thereon, and the corresponding functions are realized when the program is executed by the processor 52. The computer-readable storage medium of this embodiment is used to store the target contour delineation apparatus and, when executed by the processor 52, implements the target contour delineation methods of Embodiments One and Two.
The serial numbers of the above embodiments of this application are for description only and do not represent the relative merits of the embodiments.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation.
The above are only preferred embodiments of this application and do not therefore limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of this application, whether applied directly or indirectly in other related technical fields, is likewise included within the patent protection scope of this application.

Claims (20)

  1. A target contour delineation method, comprising:
    receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
    invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
    invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
    invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
  2. The target contour delineation method according to claim 1, wherein the target region contour is the contour of the region where the recognition target of the image recognition system is located; the target region contour is a rectangle, and the contour coordinates include the coordinates of the top-left vertex of the rectangle and the coordinates of its bottom-right vertex.
  3. The target contour delineation method according to claim 1, wherein computing the contour coordinates to obtain contour positioning information comprises:
    computing the contour coordinates in the recognition result with the contour conversion algorithm, turning them into contour positioning information that the cascading style sheet can recognize and process.
  4. The target contour delineation method according to claim 1, wherein the recognition result further includes key coordinates characterizing the key points in the target region, a key point being a region or part characterizing a distinctive feature of the recognition target;
    after sending the recognition result to the front-end device, the method further comprises:
    invoking a key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
  5. The target contour delineation method according to claim 4, wherein converting the key coordinates into key positioning information comprises:
    computing the key coordinates in the recognition result with the key conversion algorithm, turning them into key positioning information that the cascading style sheet can recognize and process.
  6. The target contour delineation method according to claim 4, wherein after converting the key coordinates into key positioning information, the method comprises:
    invoking the cascading style sheet of the front-end device to mark the key points on the image to be recognized according to the key positioning information.
  7. The target contour delineation method according to claim 1, wherein after drawing the recognition box on the image to be recognized, the method comprises:
    invoking a drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate a target image;
    displaying the recognition result and the target image on the front-end device;
    and, after cropping the image enclosed by the recognition box to generate the target image, the method further comprises:
    uploading the target image to a blockchain.
  8. A target contour delineation apparatus, comprising:
    an image input module, configured to receive an image to be recognized and invoke a conversion tool of the front-end device to convert the image into base64 encoding as an input parameter;
    an input computation module, configured to invoke the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and to send the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
    a contour positioning module, configured to invoke a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
    a recognition drawing module, configured to invoke the cascading style sheet of the front-end device to draw a recognition box on the image according to the contour positioning information.
  9. A computer system comprising multiple computer devices, each computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processors of the multiple computer devices, when executing the computer programs, implement a target contour delineation method whose steps comprise:
    receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
    invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
    invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
    invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
  10. The computer system according to claim 9, wherein the target region contour is the contour of the region where the recognition target of the image recognition system is located; the target region contour is a rectangle, and the contour coordinates include the coordinates of the top-left vertex of the rectangle and the coordinates of its bottom-right vertex;
    computing the contour coordinates to obtain contour positioning information comprises:
    computing the contour coordinates in the recognition result with the contour conversion algorithm, turning them into contour positioning information that the cascading style sheet can recognize and process.
  11. The computer system according to claim 9, wherein the recognition result further includes key coordinates characterizing the key points in the target region, a key point being a region or part characterizing a distinctive feature of the recognition target;
    after sending the recognition result to the front-end device, the method further comprises:
    invoking a key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
  12. The computer system according to claim 11, wherein converting the key coordinates into key positioning information comprises:
    computing the key coordinates in the recognition result with the key conversion algorithm, turning them into key positioning information that the cascading style sheet can recognize and process.
  13. The computer system according to claim 11, wherein after converting the key coordinates into key positioning information, the method comprises:
    invoking the cascading style sheet of the front-end device to mark the key points on the image to be recognized according to the key positioning information.
  14. The computer system according to claim 9, wherein after drawing the recognition box on the image to be recognized, the method comprises:
    invoking a drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate a target image;
    displaying the recognition result and the target image on the front-end device;
    and, after cropping the image enclosed by the recognition box to generate the target image, the method further comprises:
    uploading the target image to a blockchain.
  15. A computer-readable storage medium comprising multiple storage media, each storing a computer program, wherein the computer programs stored on the multiple storage media, when executed by a processor, implement a target contour delineation method whose steps comprise:
    receiving an image to be recognized, and invoking a conversion tool of the front-end device to convert the image to be recognized into base64 encoding as an input parameter;
    invoking the image recognition system in the back-end interface to compute the input parameter and obtain a recognition result, and sending the recognition result to the front-end device, wherein the recognition result includes contour coordinates characterizing the contour of the target region in the image;
    invoking a contour conversion algorithm of the front-end device to compute the contour coordinates and obtain contour positioning information; and
    invoking the cascading style sheet of the front-end device to draw a recognition box on the image to be recognized according to the contour positioning information.
  16. The computer-readable storage medium according to claim 15, wherein the target region contour is the contour of the region where the recognition target of the image recognition system is located; the target region contour is a rectangle, and the contour coordinates include the coordinates of the top-left vertex of the rectangle and the coordinates of its bottom-right vertex;
    computing the contour coordinates to obtain contour positioning information comprises:
    computing the contour coordinates in the recognition result with the contour conversion algorithm, turning them into contour positioning information that the cascading style sheet can recognize and process.
  17. The computer-readable storage medium according to claim 15, wherein the recognition result further includes key coordinates characterizing the key points in the target region, a key point being a region or part characterizing a distinctive feature of the recognition target;
    after sending the recognition result to the front-end device, the method further comprises:
    invoking a key conversion algorithm of the front-end device to convert the key coordinates into key positioning information.
  18. The computer-readable storage medium according to claim 17, wherein converting the key coordinates into key positioning information comprises:
    computing the key coordinates in the recognition result with the key conversion algorithm, turning them into key positioning information that the cascading style sheet can recognize and process.
  19. The computer-readable storage medium according to claim 17, wherein after converting the key coordinates into key positioning information, the method comprises:
    invoking the cascading style sheet of the front-end device to mark the key points on the image to be recognized according to the key positioning information.
  20. The computer-readable storage medium according to claim 15, wherein after drawing the recognition box on the image to be recognized, the method comprises:
    invoking a drawing algorithm of the front-end device to crop the image enclosed by the recognition box and generate a target image;
    displaying the recognition result and the target image on the front-end device;
    and, after cropping the image enclosed by the recognition box to generate the target image, the method further comprises:
    uploading the target image to a blockchain.
PCT/CN2021/096666 2020-06-24 2021-05-28 Target contour delineation method and apparatus, computer system, and readable storage medium WO2021258991A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010587639.XA CN111738166B (zh) 2020-06-24 2020-06-24 Target contour delineation method and apparatus, computer system, and readable storage medium
CN202010587639.X 2020-06-24

Publications (1)

Publication Number Publication Date
WO2021258991A1 true WO2021258991A1 (zh) 2021-12-30

Family

ID=72652052

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096666 WO2021258991A1 (zh) 2020-06-24 2021-05-28 Target contour delineation method and apparatus, computer system, and readable storage medium

Country Status (2)

Country Link
CN (1) CN111738166B (zh)
WO (1) WO2021258991A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111738166B (zh) * 2020-06-24 2024-03-01 Ping An Technology (Shenzhen) Co., Ltd. Target contour delineation method and apparatus, computer system, and readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080193020A1 (en) * 2005-02-21 2008-08-14 Mitsubishi Electric Coporation Method for Facial Features Detection
CN102163282A (zh) * 2011-05-05 2011-08-24 Hanwang Technology Co., Ltd. Method and apparatus for acquiring a region of interest in a palmprint image
CN102880877A (zh) * 2012-09-28 2013-01-16 Chengdu Information Technology of Chinese Academy of Sciences Co., Ltd. Target recognition method based on contour features
CN109933638A (zh) * 2019-03-19 2019-06-25 Tencent Technology (Shenzhen) Co., Ltd. Method, apparatus, and storage medium for determining a target region contour based on an electronic map
CN111738166A (zh) * 2020-06-24 2020-10-02 Ping An Technology (Shenzhen) Co., Ltd. Target contour delineation method and apparatus, computer system, and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3735893B2 (ja) * 1995-06-22 2006-01-18 Seiko Epson Corporation Face image processing method and face image processing apparatus
JP4998637B1 (ja) * 2011-06-07 2012-08-15 Omron Corporation Image processing device, information generation device, image processing method, information generation method, control program, and recording medium
CN106778585B (zh) * 2016-12-08 2019-04-16 Tencent Technology (Shanghai) Co., Ltd. Face key point tracking method and apparatus


Also Published As

Publication number Publication date
CN111738166A (zh) 2020-10-02
CN111738166B (zh) 2024-03-01


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21828622

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21828622

Country of ref document: EP

Kind code of ref document: A1