CN112036396A - Ship name recognition method and device, electronic equipment and computer readable storage medium - Google Patents

Ship name recognition method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN112036396A
CN112036396A (application CN202010961767.6A)
Authority
CN
China
Prior art keywords
ship name
ship
area
image
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010961767.6A
Other languages
Chinese (zh)
Other versions
CN112036396B (en)
Inventor
任昊
徐博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Goldway Intelligent Transportation System Co Ltd
Original Assignee
Shanghai Goldway Intelligent Transportation System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Goldway Intelligent Transportation System Co Ltd filed Critical Shanghai Goldway Intelligent Transportation System Co Ltd
Priority to CN202010961767.6A priority Critical patent/CN112036396B/en
Publication of CN112036396A publication Critical patent/CN112036396A/en
Application granted granted Critical
Publication of CN112036396B publication Critical patent/CN112036396B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the application provides a ship name recognition method and device, an electronic device and a computer readable storage medium, relating to the technical field of data processing. A ship name area is determined in an image containing a ship, and the image features of the sub-area where each text unit is located in the ship name area are extracted, where a text unit is a character row or a character column. When at least two text units exist in the ship name area, the image features of the sub-areas are spliced according to the arrangement sequence of the text units in the ship name area to obtain a target feature, and text recognition is performed based on the target feature to obtain a ship name recognition result. In the process of recognizing the ship name, the target feature obtained by splicing the image features of the sub-areas is a feature of the whole ship name area, so the ship name recognition result obtained by text recognition based on the target feature is a recognition result of the whole ship name; the ship name area is recognized as a whole, and ship name recognition efficiency is improved.

Description

Ship name recognition method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and an apparatus for identifying a ship name, an electronic device, and a computer-readable storage medium.
Background
Waterway shipping has long been one of the important modes of transportation in human society and plays a role in promoting economic development. With the continuous growth in the number of ships, navigation density keeps rising, and incidents such as overloaded or over-limit transportation and illegal flight occur from time to time. Managing ships in waterway shipping therefore requires knowing the identity information of passing ships, which makes recognizing a ship's name important.
When the ship name comprises multiple lines of text, the traditional ship name recognition method needs to recognize each line of text separately and finally splice the per-line recognition results to obtain the final ship name recognition result. Thus, in the prior art, a ship name comprising multiple lines of text is recognized line by line, so recognition has to be performed several times and recognition efficiency is low.
Disclosure of Invention
The embodiment of the application aims to provide a ship name identification method, a ship name identification device, electronic equipment and a computer readable storage medium, so as to solve the problem of low ship name identification efficiency in the prior art. The specific technical scheme is as follows:
in a first aspect of an embodiment of the present application, an embodiment of the present application provides a ship name identification method, where the method includes:
determining a ship name area in an image containing a ship;
extracting image characteristics of a sub-region where a text unit is located in the ship name region, wherein the text unit is a character row or a character column;
under the condition that at least two text units exist in the ship name area, splicing the image features of each sub-area according to the arrangement sequence of each text unit in the ship name area to obtain a target feature; the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and performing text recognition based on the target characteristics to obtain a ship name recognition result.
Optionally, the position information includes a height and/or a width of the text unit in the ship name area; the step of determining the arrangement sequence of the text units in the ship name area comprises the following steps:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
Optionally, when the number of the images is greater than 1, performing text recognition based on the target feature to obtain a ship name recognition result, including:
for each image, performing text recognition based on the target features corresponding to the image to obtain a target ship name corresponding to the image and a confidence coefficient of the target ship name; the image is captured between the time the radar detects the bow and the time the radar detects the stern;
and obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
Optionally, the obtaining of the ship name recognition result according to the confidence of each target ship name includes:
determining the target ship name with the highest confidence coefficient as the ship name contained in the ship name recognition result;
or
And determining the same ship name in the target ship names corresponding to the images, performing weighted calculation on the confidence coefficient of the same ship name aiming at each same ship name to obtain the weighted confidence coefficient of the same ship name, and determining the same ship name with the highest weighted confidence coefficient as the ship name contained in the ship name recognition result.
Optionally, the determining a ship name region in the image including the ship includes:
inputting each image containing the ship into a pre-trained ship name area detection model, and detecting the image through the pre-trained ship name area detection model to obtain a ship name area; the pre-trained ship name area detection model is obtained by training based on a preset training set, and the preset training set comprises a sample image containing a ship; and adding a ship name area label to an area containing the ship name in each sample image, wherein the area comprises at least one text unit.
In a second aspect of embodiments of the present application, an embodiment of the present application provides a ship name recognition apparatus, including:
a determining module for determining a ship name area in an image containing a ship;
the extraction module is used for extracting the image characteristics of the sub-region where the text unit is located in the ship name region, wherein the text unit is a character row or a character column;
the splicing module is used for splicing the image characteristics of each sub-region according to the arrangement sequence of each text unit in the ship name region to obtain target characteristics under the condition that at least two text units exist in the ship name region; the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and the recognition module is used for performing text recognition based on the target characteristics to obtain a ship name recognition result.
Optionally, the position information includes a height and/or a width of the text unit in the ship name area;
the splicing module is specifically configured to:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
Optionally, when the number of the images is greater than 1, the identification module includes:
the confidence coefficient submodule is used for carrying out text recognition, for each image, on the basis of the target features corresponding to the image so as to obtain the target ship name corresponding to the image and the confidence coefficient of the target ship name; the image is captured between the time the radar detects the bow and the time the radar detects the stern;
and the processing submodule is used for obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
Optionally, the processing sub-module is specifically configured to:
determining the target ship name with the highest confidence coefficient as the ship name contained in the ship name recognition result;
or
And determining the same ship name in the target ship names corresponding to the images, performing weighted calculation on the confidence coefficient of the same ship name aiming at each same ship name to obtain the weighted confidence coefficient of the same ship name, and determining the same ship name with the highest weighted confidence coefficient as the ship name contained in the ship name recognition result.
Optionally, the determining module is specifically configured to:
inputting each image containing the ship into a pre-trained ship name area detection model, and detecting the image through the pre-trained ship name area detection model to obtain a ship name area; the pre-trained ship name area detection model is obtained by training based on a preset training set, and the preset training set comprises a sample image containing a ship; and adding a ship name area label to an area containing the ship name in each sample image, wherein the area comprises at least one text unit.
In another aspect of the embodiments of the present application, an embodiment of the present application provides an electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method for identifying a name of a ship according to any one of the first aspect when executing a program stored in a memory.
In a further aspect of the embodiments of the present application, the embodiments of the present application provide a computer-readable storage medium, in which a computer program is stored, and the computer program, when executed by a processor, implements any of the ship name identification methods described in any of the first aspects above.
In a further aspect of embodiments of the present application, there is provided a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method for identifying a name of a ship as described in any one of the first aspect above.
According to the ship name recognition method and device, electronic device, computer-readable storage medium and computer program product containing instructions provided by the embodiments of the application, a ship name area is determined in an image containing a ship, and the image features of the sub-area where each text unit is located in the ship name area are extracted, where a text unit is a character row or a character column. When at least two text units exist in the ship name area, the image features of the sub-areas are spliced according to the arrangement sequence of the text units in the ship name area to obtain a target feature, where the arrangement sequence of the text units in the ship name area is determined according to the position information of the text units; text recognition is then performed based on the target feature to obtain a ship name recognition result. In the process of recognizing the ship name, the target feature obtained by splicing the image features of the sub-areas is a feature of the whole ship name area, so the ship name recognition result obtained by text recognition based on the target feature is a recognition result of the whole ship name; the ship name area is recognized as a whole, and ship name recognition efficiency is improved. Of course, not all of the above advantages need be achieved in practicing any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a first flowchart of a ship name identification method according to an embodiment of the present application;
fig. 2 is a second flowchart of a ship name identification method according to an embodiment of the present application;
fig. 3 is a third flowchart illustrating a ship name identification method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a ship name recognition device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In order to solve the problem of low ship name identification efficiency in the prior art, embodiments of the present application provide a ship name identification method, apparatus, electronic device, storage medium, and computer program product containing instructions. Next, a ship name recognition method provided in the embodiment of the present application will be described first. The method is applied to electronic equipment, and particularly can be any electronic equipment which can provide ship name identification service, such as a personal computer, a server and the like. The ship name identification method provided by the embodiment of the application can be realized by at least one of software, hardware circuit and logic circuit arranged in the electronic equipment.
As shown in fig. 1, fig. 1 is a first flowchart schematic diagram of a ship name identification method provided in an embodiment of the present application, and may include:
s110, determining a ship name area in the image containing the ship;
s120, extracting the image characteristics of the sub-area where the text unit is located in the ship name area, wherein the text unit is a character row or a character column;
s130, under the condition that at least two text units exist in the ship name area, splicing the image characteristics of each sub-area according to the arrangement sequence of each text unit in the ship name area to obtain target characteristics; wherein, the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and S140, performing text recognition based on the target characteristics to obtain a ship name recognition result.
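The flow of S110 to S140 can be summarized with a minimal sketch; the detector, feature extractor and recognizer below are hypothetical placeholders standing in for the trained models described later in this embodiment, not a definitive implementation.

```python
# Minimal sketch of the S110-S140 flow; detect_name_region, extract_features
# and recognize_text are hypothetical callables standing in for the trained models.
import numpy as np

def recognize_ship_name(image, detect_name_region, extract_features, recognize_text):
    # S110: determine the ship name area in the image containing the ship
    name_region, text_units = detect_name_region(image)

    # S120: extract image features of the sub-area where each text unit
    # (character row or character column) is located
    features = [extract_features(name_region, unit) for unit in text_units]

    # S130: when at least two text units exist, splice the per-sub-area features
    # according to the arrangement sequence of the text units
    if len(features) >= 2:
        order = sorted(range(len(text_units)), key=lambda i: text_units[i]["position"])
        target_feature = np.concatenate([features[i] for i in order], axis=-1)
    else:
        target_feature = features[0]

    # S140: text recognition based on the target feature
    return recognize_text(target_feature)
```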
In the solution provided by the embodiment of the application, during ship name recognition the target feature obtained by splicing the image features of the sub-areas is a feature of the whole ship name area, so the ship name recognition result obtained by text recognition based on the target feature is a recognition result of the whole ship name; the ship name area is recognized as a whole, and ship name recognition efficiency is improved. In addition, the prior art recognizes each line of text separately and finally splices the per-line recognition results into the final ship name recognition result. When there are multiple lines of text, the splicing step has to judge whether the recognition results of the individual lines belong to the same ship name, and this judgement is affected by the image resolution, the size of the ship name and the inclination of the ship name, so the per-line results must be processed with complex splicing logic; when that logic is inaccurate, ship name recognition accuracy suffers.
In S110, the image containing the ship may be captured by a capturing device, which then transmits the captured image to the electronic device. For example, when the electronic device is a server, ship name recognition may be completed in cooperation with a thermal imaging device and the capturing device. In this case, when a ship travels through the channel gate, the thermal imaging device generates a thermal image and ship detection is performed based on the thermal image; when a ship is detected, the thermal imaging device generates a capture instruction, which is sent to the capturing device directly or through the server, and the capturing device captures an image. Since a ship has been detected in the channel gate, the captured image contains the ship; the capturing device sends the captured image to the server, and the server determines the ship name area in the image after receiving it. In another case, after the thermal imaging device generates the thermal image, it can send the thermal image directly to the server; the server identifies whether a ship is present in the channel gate and controls the capturing device to start capturing images when a ship is identified. The above is merely an example and is not intended to limit the present application.
Since the image including the ship includes not only the ship name region but also other non-ship name regions not including the ship name text, it is necessary to determine the ship name region in the image including the ship after obtaining the image including the ship in order to reduce the amount of calculation, reduce unnecessary information, and shorten the time for identifying the ship name. Determining the ship name area in the image containing the ship can be realized by adopting an artificial intelligence mode. Specifically, the image including the ship is input into a pre-trained ship name area detection model, and the image including the ship is detected by the ship name area detection model to obtain a ship name area. The ship name region detection model is a model with a ship name region detection function trained in advance based on the sample image, and the ship name region detection model may be a model based on machine learning, for example, a model based on deep learning. The specific training process may implement model training in a traditional back propagation manner, which is not described herein again.
Ship names are written with various types of characters, such as digits, English letters and Chinese characters, arranged in a row direction or a column direction, so a ship name can be regarded as being represented by character rows or character columns. Character rows and character columns are collectively referred to as text units.
In S120, the text unit is a character line or a character column, and the image feature of the sub-area where the text unit is located in the ship name area is extracted.
The arrangement sequence of the text units in the ship name area can be determined according to the position information of the text units. Specifically, the arrangement sequence of the text units in the ship name area may be determined at the same time as the ship name area is determined in the image of the ship. For example, the ship name area is obtained by inputting the image containing the ship into a pre-trained ship name area detection model and detecting the image with that model. When the ship name area detection model is trained, sequence labels are added to the sample images used for training: when a ship name area contains multiple text units, a sequence label is added to each text unit according to its position in the ship name area. The ship name area detection model is trained based on the sample images with sequence labels, so that the trained model can determine the arrangement sequence of the text units in the ship name area while obtaining the ship name area.
Of course, the order of arrangement of the text units in the ship name area may be determined according to the width of the ship name area and the height of the ship name area.
In an implementation manner of the embodiment of the present application, the position information includes a height and/or a width of the text unit in the ship name area; the step of determining the order of arrangement of the text units in the ship name area includes:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
Since the text in the ship name area is generally laid out according to a standard-specified width and height, once the ship name area is obtained, the ratio of the width of the ship name area to its height can reflect the number of rows or columns of text units. Therefore, the ratio of the width of the ship name area to the height of the ship name area can be calculated, and the number of rows or columns of text units in the ship name area is determined according to the preset correspondence between this ratio and the number of rows or columns in the ship name area.
Furthermore, the preset correspondence between the ratio of the width of the ship name area to the height of the ship name area and the number of rows or columns in the ship name area is stored in a preset database. After the ratio of the width of the ship name area to its height is calculated, the correspondence is retrieved from the preset database, and the number of rows or columns in the ship name area is determined accordingly.
For example, if the text unit is a character row, it may be determined from collected sample ship images that the ship name area contains 1 character row when the ratio of the width of the ship name area to its height is between 0.7 and 1.1, and 2 character rows when the ratio is between 0.4 and 0.7; the preset correspondence between the width-to-height ratio of the ship name area and the number of rows or columns is set according to the sample ship images, and the specific correspondence is not limited here. If the text unit is a character row, the smaller the ratio of the width of the ship name area to its height, the more character rows the ship name area contains. Similarly, if the text unit is a character column, the larger the ratio of the width of the ship name area to its height, the more character columns the ship name area contains.
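As an illustration of such a correspondence lookup, the sketch below reuses the example intervals above (0.7 to 1.1 for 1 row, 0.4 to 0.7 for 2 rows); the interval boundaries are assumptions calibrated from sample ship images, not fixed values of the method.

```python
# Illustrative lookup from the width/height ratio of the ship name area to the
# number of character rows; boundaries would be calibrated from sample images.
ROW_COUNT_TABLE = [
    (0.7, 1.1, 1),   # ratio in [0.7, 1.1) -> 1 character row
    (0.4, 0.7, 2),   # ratio in [0.4, 0.7) -> 2 character rows
]

def estimate_row_count(region_width, region_height, table=ROW_COUNT_TABLE):
    ratio = region_width / region_height
    for low, high, rows in table:
        if low <= ratio < high:
            return rows
    # smaller width/height ratio means more rows; fall back to a rough estimate
    return max(1, round(region_height / region_width))
```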
In an implementation manner of the embodiment of the present application, the text unit is a character row; the number of text units in the ship name area is determined, and then the arrangement sequence of the text units in the ship name area is determined according to the vertical positions of the text units in the ship name area.
When the text unit is a character row, after the number of text units is determined, for example 2 character rows with text unit 1 above text unit 2, text unit 1 comes before text unit 2 in the splicing; that is, text unit 2 is spliced after text unit 1.
In another implementation manner of the embodiment of the present application, the text unit is a character row; the number of text units in the ship name area is determined, and then the row or column number of each text unit is determined according to the vertical position of the text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
For example, the height of the ship name area is 20 pixels, with pixel coordinates counted from top to bottom; text unit 1 lies between pixels 0 and 9 of the ship name area, so text unit 1 is in the first row, and text unit 2 lies between pixels 11 and 20, so text unit 2 is in the second row.
In an implementation manner of the embodiment of the present application, if the text unit is a character line, determining the number of the text units in the ship name area includes: and determining the number of text units in the ship name area based on the height of the ship name area and the preset character height.
For example, if the height of the ship name area is 200 pixels and the preset character height is 100 pixels, it is determined that the ship name area includes 2 character lines, that is, 2 lines of text units, thereby determining the number of lines of text units.
In an implementation manner of the embodiment of the present application, if the text unit is a character column, determining the number of the text units in the ship name area includes: determining the number of text units in the ship name area based on the width of the ship name area and the preset character width.
For example, if the width of the ship name area is 200 pixels and the width of the preset character is 100 pixels, it may be determined that the ship name area includes 2 character columns, that is, 2 columns of text units, thereby determining the number of columns of text units.
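A short sketch of this rule follows; the 100-pixel preset character height and width are assumptions taken from the examples above.

```python
# Number of text units from the region size and a preset character size.
# The 100-pixel presets are taken from the examples and are not fixed constants.
PRESET_CHAR_HEIGHT = 100
PRESET_CHAR_WIDTH = 100

def count_character_rows(region_height, char_height=PRESET_CHAR_HEIGHT):
    return max(1, round(region_height / char_height))

def count_character_columns(region_width, char_width=PRESET_CHAR_WIDTH):
    return max(1, round(region_width / char_width))

# e.g. a 200-pixel-high ship name area with 100-pixel characters -> 2 character rows
assert count_character_rows(200) == 2
```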
In an embodiment of the application, after the ship name area is determined, the ship name area can be input into a ship name recognition model trained in advance to perform ship name recognition. And carrying out ship name recognition on the ship name area through a pre-trained ship name recognition model, and finally obtaining a ship name recognition result. The ship name recognition model is a model with a ship name recognition function obtained based on sample image pre-training, the ship name recognition model can be a model based on machine learning, for example, the model can be a model based on deep learning, the specific training process can adopt a traditional back propagation mode to realize model training, and details are not repeated here.
In some cases, the ship name recognition model has a size requirement on the input image, and the ship name region needs to be preprocessed first after the ship name region is acquired, and specifically, the ship name region can be scaled according to the size requirement of the ship name recognition model on the input image, so that the scaled ship name region meets the size requirement of the ship name recognition model on the input image. For example, the recognition model requires that the size of the input image is 500 pixels × 300 pixels, and the size of the ship name region is 1000 pixels × 300 pixels, and in order to meet the size requirement of the ship name recognition model on the input image, it is necessary to first perform scaling on the ship name region in equal proportion, the scaled ship name region is 500 pixels × 150 pixels, and then expand the black edge in the column direction on the scaled ship name region, so as to obtain a preprocessed ship name region with a black edge and a size of 500 pixels × 300 pixels.
Further alternatively, the identification model requires that the size of the input image is 1000 pixels × 300 pixels, and the size of the ship name region is 250 pixels × 150 pixels, and in order to meet the size requirement of the ship name identification model on the input image, the ship name region needs to be scaled in an equal proportion, the scaled ship name region is 500 pixels × 300 pixels, and then the scaled ship name region is extended with black edges in the row direction, so that the preprocessed ship name region with black edges and the size of 1000 pixels × 300 pixels is obtained.
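The preprocessing described above, equal-proportion scaling followed by black-edge padding to the input size of the recognition model, could be sketched as follows; the 1000 × 300 target size is taken from the example, and the use of OpenCV for resizing is an assumption.

```python
import numpy as np
import cv2  # OpenCV is assumed here purely for resizing

def preprocess_name_region(region, target_w=1000, target_h=300):
    """Scale the ship name region in equal proportion, then pad with black
    edges so that it matches the recognition model's input size."""
    h, w = region.shape[:2]
    scale = min(target_w / w, target_h / h)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(region, (new_w, new_h))

    # place the scaled region at the top-left; the remaining area stays black
    padded = np.zeros((target_h, target_w) + region.shape[2:], dtype=region.dtype)
    padded[:new_h, :new_w] = resized
    return padded
```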
Of course, the number of sub-regions in the ship name region may be determined together with the ship name region itself: the image containing the ship is input into the pre-trained ship name region detection model, the model detects the ship name region in the image, and the number of sub-regions is determined at the same time. The ship name region can then be processed according to the number of sub-regions, and the sub-region where each text unit is located is determined.
For example, the recognition model requires that the size of the input image is 1000 pixels by 300 pixels, and the default input of the image of the recognition model is that the first 150 pixels are the first row and the second 150 pixels are the second row from top to bottom; when the ship name area is determined to comprise 2 lines of texts, the size of the ship name area is 250 pixels × 150 pixels, the ship name area is directly scaled in an equal proportion, the scaled ship name area is 500 pixels × 300 pixels, and then black edges are expanded in the line direction of the scaled ship name area, so that the preprocessed ship name area with the black edges and the size of 1000 pixels × 300 pixels is obtained. And inputting the preprocessed ship name area into a recognition model for ship name recognition, so that the line number of the text unit of the first 150 pixels in the preprocessed ship name area is determined as a first line, and the line number of the text unit of the second 150 pixels in the preprocessed ship name area is determined as a second line from top to bottom. Or when the ship name area is determined to comprise 1 line of text, the size of the ship name area is 250 pixels × 150 pixels, black edges are expanded in the column direction of the ship name area, a first preprocessed ship name area with black edges and the size of 250 pixels × 300 pixels is obtained, and then the black edges are expanded in the row direction of the first preprocessed ship name area, and a second preprocessed ship name area with black edges and the size of 1000 pixels × 300 pixels is obtained. And inputting the second preprocessed ship name region into the recognition model for ship name recognition, so that the first 150 pixels in the second preprocessed ship name region can be determined as the region where the text unit is located according to the sequence from top to bottom.
Furthermore, the recognition model need not impose any requirement on the height occupied by each line of text; the sub-region where each text unit is located can be calculated from the determined number of sub-regions and the size of the ship name region input into the recognition model.
For example, when it is determined that the ship name region includes 2 lines of text, the recognition model requires that the size of the input image is 1000 pixels × 300 pixels, and the size of the ship name region is 250 pixels × 150 pixels, the ship name region is directly scaled in an equal proportion, the scaled ship name region is 500 pixels × 300 pixels, and then the scaled ship name region is extended with black edges in a line direction, so that the preprocessed ship name region with the black edges and the size of 1000 pixels × 300 pixels is obtained. According to the determination that the ship name area comprises 2 lines of texts and the height of the preprocessed ship name area is 300 pixels, the size of the height area occupied by each line of texts in the preprocessed ship name area can be determined to be 150 pixels, and the line number of the text unit in the preprocessed ship name area can be determined.
For example, when it is determined that the ship name region includes 3 lines of text, the recognition model requires that the size of the input image is 1000 pixels × 300 pixels, and the size of the ship name region is 250 pixels × 150 pixels, the ship name region is directly scaled in an equal proportion, the scaled ship name region is 500 pixels × 300 pixels, and then the scaled ship name region is extended with black edges in a line direction, so that the preprocessed ship name region with black edges and the size of 1000 pixels × 300 pixels is obtained. According to the determination that the ship name area comprises 3 lines of texts and the height of the preprocessed ship name area is 300 pixels, the size of the height area occupied by each line of texts in the preprocessed ship name area can be determined to be 100 pixels, and the line number of the text unit in the preprocessed ship name area can be determined.
For example, when it is determined that the ship name region includes 1 line of text, the recognition model requires that the size of the input image is 1000 pixels × 300 pixels, and the size of the ship name region is 250 pixels × 150 pixels, the ship name region is directly scaled in an equal proportion, the scaled ship name region is 500 pixels × 300 pixels, and then the scaled ship name region is extended with black edges in a line direction, so that the preprocessed ship name region with the black edges and the size of 1000 pixels × 300 pixels is obtained. According to the fact that the ship name area comprises 1 line of text, the line number of the text unit in the preprocessed ship name area can be determined.
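Once the number of text lines and the preprocessed height are known, the per-line sub-regions can be located as in this small sketch, which reuses the example sizes above.

```python
def line_sub_regions(preprocessed_height, num_lines):
    """Split the preprocessed ship name region evenly into per-line sub-regions,
    as in the examples above (300-pixel height and 2 lines -> 150 pixels per line)."""
    line_height = preprocessed_height // num_lines
    return [(i * line_height, (i + 1) * line_height) for i in range(num_lines)]

# e.g. line_sub_regions(300, 3) -> [(0, 100), (100, 200), (200, 300)]
```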
And the ship name recognition model extracts the image features of the sub-regions where the text units are located in the preprocessed ship name regions to obtain the image features of the sub-regions. And under the condition that at least two text units exist in the ship name area, splicing the image features of the sub-areas according to the arrangement sequence of the text units in the ship name area to obtain the target feature.
The ship name recognition model can comprise an input layer, a convolution layer, a pooling layer, a splicing layer and an output layer, and image features of all sub-regions are spliced through the splicing layer according to the arrangement sequence of all text units in a ship name region to obtain target features, wherein the implementation mode of each network layer of the ship name recognition model is not limited herein.
In an implementation manner of the embodiment of the present application, if the text unit is a character line, the image features of each sub-region are spliced according to the arrangement sequence of each text unit in the ship name region to obtain the target feature, where the method includes:
and sequentially splicing the image features of the sub-regions according to the arrangement sequence of the text units from top to bottom to obtain the target features.
If the text unit is a character line, referring to fig. 2, fig. 2 is a second flowchart of the ship name recognition method according to the embodiment of the present application.
S210, determining a ship name area in the image containing the ship;
s220, extracting the image characteristics of the sub-area where the character lines are located in the ship name area to obtain the image characteristics of the sub-area where the character lines are located;
s230, splicing the image characteristics of each subarea according to the arrangement sequence of each character line in the ship name area to obtain target characteristics;
and S240, performing text recognition based on the target characteristics to obtain a ship name recognition result.
In an implementation manner of the embodiment of the present application, if the text unit is a character column, the image features of each sub-region are spliced according to the arrangement sequence of each text unit in the ship name region to obtain the target feature, where the method includes:
and sequentially splicing the image characteristics of each subarea according to the arrangement sequence of each text unit from left to right to obtain the target characteristics.
In an implementation manner of the embodiment of the present application, if the text unit is a character column, the image features of each sub-region are spliced according to the arrangement sequence of each text unit in the ship name region to obtain the target feature, where the method includes:
and sequentially splicing the image features of the sub-regions according to the arrangement sequence of the text units from right to left to obtain the target features.
And after the target features are obtained, performing text recognition based on the target features to obtain a ship name recognition result. Therefore, when at least two text units exist in the ship name area, the image features of the sub-areas can be spliced to obtain the target feature, the target feature obtained after the image features of the sub-areas are spliced is the feature of the whole ship name area, and therefore the ship name recognition result obtained by text recognition based on the target feature is the recognition result of the whole ship name, the whole recognition of the ship name area is achieved, and the ship name recognition efficiency is improved.
In an implementation manner of the embodiment of the present application, when the number of the images is greater than 1, performing text recognition based on the target feature to obtain a ship name recognition result includes:
step one, aiming at each image, performing text recognition based on target characteristics corresponding to the image to obtain a target ship name corresponding to the image and a confidence coefficient of the target ship name;
and step two, obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
The image is taken between the detection of the bow by the radar and the detection of the stern by the radar.
Radar is a device that detects objects using electromagnetic waves. In operation, the radar emits electromagnetic waves; because the sea surface and inland waterways are open with little obstruction, the emitted waves are either not reflected or only reflected after a long delay, but once a ship enters the channel gate it forms an obstruction and the emitted waves are reflected. Based on this, the radar can detect whether a ship has entered the channel gate from the time between emitting an electromagnetic wave and receiving its reflection.
In an embodiment of the application, when the radar emits an electromagnetic wave and receives the reflected wave, the timestamp of the emission and the timestamp of receiving the reflection are recorded. From the emission timestamp, the reception timestamp and the propagation speed of the electromagnetic wave, the radar can determine information such as the distance from the ship to the radar, the rate of change of that distance (radial velocity), the position and the height, and then determine from this information whether what was detected is the bow or the stern.
For example, the radar is used for detecting a ship passing through the preset bayonet, and assuming that the distance from the preset bayonet to the radar is a first distance, when the radar calculates the distance between the radar and the shielding object to be the first distance according to the timestamp of transmitting the electromagnetic wave, the timestamp of receiving the reflected wave and the speed of the electromagnetic wave, it can be determined that the ship is in the preset bayonet. Based on this, it can be judged whether the radar detects the bow or the stern in the following manner.
Whether the ship bow is detected by the radar can be judged by the following modes: if the distance between the radar and the shielding object obtained by the previous radar calculation is not the first distance, the distance between the radar and the shielding object obtained by the current calculation is the first distance, it is indicated that a ship is in the preset bayonet, and the radar detects the bow of the ship. In addition, in order to accurately detect the bow, the distance between the radar and the shielding object can be calculated for the first distance for the first preset number of times in succession by the radar, and the fact that the bow is detected by the radar is judged. For example, the first predetermined number may be 3, 4, 5, etc.
Whether the radar detects the stern can be judged by the following modes: if the distance between the radar and the shielding object obtained by the previous radar calculation is the first distance, the distance between the radar and the shielding object obtained by the current calculation is not the first distance, it is indicated that no ship exists in the preset bayonet, and the radar detects the stern. In addition, in order to accurately detect the stern, the distance between the radar and the shielding object can be calculated for a second preset number of times in succession by the radar and is not the first distance, and then the detection of the stern by the radar is judged. For example, the second predetermined number may be 3, 4, 5, etc.
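The bow and stern judgement described above can be sketched as a small state machine; the gate distance, tolerance and consecutive-count thresholds below are illustrative assumptions, not prescribed values.

```python
class GateRadarMonitor:
    """Judge bow/stern passage from consecutive radar distance measurements,
    following the rule described above (threshold values are illustrative)."""

    def __init__(self, gate_distance, hits_for_bow=3, misses_for_stern=3, tol=0.5):
        self.gate_distance = gate_distance        # first distance: gate to radar
        self.hits_for_bow = hits_for_bow          # first preset number of times
        self.misses_for_stern = misses_for_stern  # second preset number of times
        self.tol = tol
        self.hits = 0
        self.misses = 0
        self.ship_present = False

    def update(self, measured_distance):
        """Returns 'bow', 'stern' or None for each new radar measurement."""
        at_gate = abs(measured_distance - self.gate_distance) <= self.tol
        if at_gate:
            self.hits += 1
            self.misses = 0
            if not self.ship_present and self.hits >= self.hits_for_bow:
                self.ship_present = True
                return "bow"
        else:
            self.misses += 1
            self.hits = 0
            if self.ship_present and self.misses >= self.misses_for_stern:
                self.ship_present = False
                return "stern"
        return None
```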
In addition to the above, in another embodiment of the present application, whether the radar detects the bow or the stern may be determined by: when the radar receives reflected waves, the radar generates a shooting instruction, the shooting instruction is directly sent to the shooting device or sent to the shooting device through the electronic equipment, the shooting device starts to shoot images after receiving the shooting instruction, the shooting device sends the images to the electronic equipment, the electronic equipment conducts ship part identification on the images, and whether the shot images are bow or stern is determined. The ship part identification of the image can be realized by adopting an artificial intelligence mode. Specifically, the image including the ship is input into a ship part recognition model trained in advance, and the ship part recognition is performed on the image including the ship through the ship part recognition model to obtain the information of the ship part. The vessel part recognition model is a model with a vessel part recognition function trained in advance based on the sample image, and the vessel part recognition model may be a model based on machine learning, for example, a model based on deep learning. The specific training process may implement model training in a traditional back propagation manner, which is not described herein again.
In summary, in an implementation manner of the embodiment of the present application, when the radar detects the bow of a ship, a shooting start instruction is generated, the shooting start instruction is sent to the shooting device directly or through the electronic device, and the shooting device receives the shooting start instruction, shoots an image, and sends the shot image to the electronic device, where the shooting device can shoot the image according to a preset shooting frequency, for example, shoot one image every 2 seconds; when the radar detects the stern, a shooting stopping instruction is generated, the shooting stopping instruction is directly sent to the shooting device or sent to the shooting device through the electronic equipment, and the shooting device stops shooting images after receiving the shooting stopping instruction. The image shot by the shooting device between the ship head detected by the radar and the ship tail detected by the radar is the image containing the ship.
In an implementation manner of the embodiment of the application, when the photographing device photographs an image, the capture timestamp can be recorded; the electronic device acquires the images photographed by the photographing device in real time together with their corresponding timestamps, and there may be one image or multiple images. When the radar detects the bow, it sends the timestamp of detecting the bow to the electronic device, and when it detects the stern, it sends the timestamp of detecting the stern to the electronic device. Based on the timestamps of the images, the timestamp at which the radar detected the bow and the timestamp at which it detected the stern, the electronic device takes the images photographed between the bow timestamp and the stern timestamp as the images containing the ship, and performs ship name recognition on those images.
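Selecting the images captured between the bow timestamp and the stern timestamp could look like the following sketch; the data layout (a list of image and timestamp pairs) is an assumption for illustration.

```python
def select_ship_images(images_with_timestamps, bow_timestamp, stern_timestamp):
    """Keep the images captured between the moment the radar detected the bow
    and the moment it detected the stern; these are the images containing the ship."""
    return [img for img, ts in images_with_timestamps
            if bow_timestamp <= ts <= stern_timestamp]
```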
In an implementation manner of the embodiment of the present application, the image may be captured between two adjacent detections of the stern by the radar.
For example, when the image capturing device captures an image, the capture timestamp may be recorded, and the electronic device acquires the images captured by the image capturing device in real time together with their corresponding timestamps. When the radar detects the stern, it sends the timestamp of detecting the stern to the electronic device; based on the timestamps of the images, the timestamp at which the radar last detected a stern and the timestamp at which it currently detects the stern, the electronic device takes the images captured between the two stern timestamps as the images containing the ship, and performs ship name recognition on those images.
When the number of the images is greater than 1, for each image, text recognition needs to be performed based on the target features corresponding to the image, so that a target ship name corresponding to the image and a confidence coefficient of the target ship name are obtained, wherein the confidence coefficient represents the probability that the ship name corresponding to the image is the target ship name, and then the ship name recognition results of the images can be integrated to obtain a final ship name.
In an implementation manner of the embodiment of the present application, the obtaining a ship name recognition result according to the confidence of each target ship name includes:
and determining the target ship name with the highest confidence coefficient as the ship name contained in the ship name recognition result.
For example, 4 images in total are included: image 1, image 2, image 3 and image 4. Ship name recognition on the ship name area in image 1 yields ship name A with a confidence of 0.5; on image 2, ship name B with a confidence of 0.3; on image 3, ship name C with a confidence of 0.7; and on image 4, ship name B with a confidence of 0.5. According to the confidences of the target ship names, the target ship name with the highest confidence is determined as the ship name contained in the ship name recognition result; since the confidence of C, 0.7, is the highest, C is the ship name in the ship name recognition result.
In an implementation manner of the embodiment of the present application, the obtaining a ship name recognition result according to the confidence of each target ship name includes:
and determining the same ship name in the target ship names corresponding to the images, performing weighted calculation on the confidence coefficient of the same ship name aiming at each same ship name to obtain the weighted confidence coefficient of the same ship name, and determining the same ship name with the highest weighted confidence coefficient as the ship name contained in the ship name recognition result.
Continuing the example of images 1 to 4 above, ship name recognition on the ship name area in image 2 yields ship name B with a confidence of 0.3, and on image 4 ship name B with a confidence of 0.5; the weighted confidence of ship name B is therefore 0.8. No other ship name appears more than once, so for the other names the weighted confidence equals the original confidence. Since B has the highest weighted confidence, B is the ship name in the ship name recognition result. Integrating the ship name recognition results of multiple images can improve the accuracy of ship name recognition.
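Both strategies, taking the single highest confidence or taking the highest weighted (summed) confidence of identical names, can be sketched as follows using the example values above.

```python
from collections import defaultdict

def pick_ship_name(recognitions, use_weighting=True):
    """recognitions: list of (ship_name, confidence) pairs, one per image.
    Either take the single highest-confidence name, or sum the confidences of
    identical names (the weighted confidence) and take the highest."""
    if not use_weighting:
        return max(recognitions, key=lambda r: r[1])[0]
    weighted = defaultdict(float)
    for name, conf in recognitions:
        weighted[name] += conf
    return max(weighted, key=weighted.get)

# Example values from the text: A:0.5, B:0.3, C:0.7, B:0.5
recs = [("A", 0.5), ("B", 0.3), ("C", 0.7), ("B", 0.5)]
assert pick_ship_name(recs, use_weighting=False) == "C"   # highest confidence
assert pick_ship_name(recs, use_weighting=True) == "B"    # highest weighted confidence
```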
In an implementation manner of the embodiment of the application, the ship name recognition result includes an image corresponding to a ship name, where the image corresponding to the ship name is an image obtained by performing ship name recognition.
For example, on the basis of the information about the images 1 to 4 listed in the above example, the confidence of C is at most 0.7, and C is determined as the name of the ship in the ship name recognition result, and the image 3 corresponding to C is included in the ship name recognition result. Or the weighted confidence coefficient of the B is the highest, the B is determined as the ship name in the ship name recognition result, and the ship name recognition result comprises the image 2 and the image 4 corresponding to the B. In addition, since the confidence 0.3 of B obtained by performing the ship name recognition on the ship name region in the image 2 is smaller than the confidence 0.5 of B obtained by performing the ship name recognition on the ship name region in the image 4, only the image 4 with the highest confidence corresponding to B may be included in the ship name recognition result.
When the ship name recognition result comprises the image corresponding to the ship name, the image can be displayed to the user so that the user can perform subsequent manual review, and therefore the accuracy of ship name recognition can be improved.
In an application scenario of ship name recognition, the shooting device may be a network camera and the electronic device may be a server, and the network camera, the radar and the server cooperate to complete the ship name recognition. In this case, the radar detects the ship, the network camera shoots images, and the network camera transmits the shot images to the server so that the server performs ship name recognition on the images.
In one implementation of the embodiment of the present application, determining a ship name region in an image including a ship includes:
for each image containing a ship, inputting the image into a pre-trained ship name area detection model, and detecting the image through the pre-trained ship name area detection model to obtain a ship name area; the pre-trained ship name area detection model is obtained by training based on a preset training set, the preset training set comprises sample images containing ships, and a ship name area label is added to the area containing the ship name in each sample image, wherein the area comprises at least one text unit.
The ship name area label is used for representing the position of the ship name area in the sample image.
Specifically, a preset training set is obtained, wherein the preset training set comprises a sample image containing a ship;
for each sample image, if the sample image has an area containing a ship name, adding a ship name area label to the area, wherein the area comprises at least one text unit;
for example, there is a sample image, the sample image includes a license plate of a ship, the license plate records a ship name of the ship, the license plate is a license plate including 2 lines of characters, a ship name region label is added to a position of the 2 lines of characters in the sample image as a whole, for example, a rectangular region of the 2 lines of characters in the sample image is determined, a position of the rectangular region in the sample image is used as the ship name region label, and a ship name region detection model is trained based on the sample image and the ship name region label, so that when the trained ship name region detection model detects an image including a ship, a region including a plurality of text units and including the ship name can be detected as a whole ship name region, thereby detecting the ship name region in the image. This allows the ship name areas to be identified as a whole without requiring individual identification for individual text units when identifying multiple ship name areas.
In an implementation manner of the embodiment of the present application, the trained ship name area detection model may further classify the detected ship name areas, so as to distinguish where on the ship each detected ship name area is located. Specifically, when the sample images are prepared, a ship name area category label is added, according to the structure of the ship, to each area that carries a ship name area label. For example, the ship name area category labels include "bow name", "stern name", and the like; that is, if the bow of the ship contains the ship name, the category label "bow name" is added to the area at the bow that contains the ship name. The ship name area detection model is trained based on these sample images, so that the trained model can detect a ship name area and output the category label corresponding to that area; when the trained ship name area detection model detects an image containing a ship, it outputs the ship name area of the image together with the ship name area category label of that area. From the ship name area category label, the position of the ship name area on the ship can be determined, and furthermore, whether the ship has passed through the channel gate can be judged according to the position of the ship name area on the ship.
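As an illustration only, a training sample carrying both the whole-region label described above and the category label might be represented by a structure like the following Python dictionary; the field names, coordinate values, and category strings are hypothetical and not prescribed by the embodiment.

# a hypothetical annotation for one sample image: the 2 lines of characters
# on the name plate are labelled as a single ship name area (one rectangle),
# optionally with a category describing where on the ship the area sits
sample_annotation = {
    "image": "sample_0001.jpg",            # sample image containing a ship
    "ship_name_areas": [
        {
            "bbox": [412, 230, 640, 318],  # x_min, y_min, x_max, y_max of the whole area
            "text_units": 2,               # the area spans 2 character rows
            "category": "bow name",        # ship name area category label
        }
    ],
}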
Referring to fig. 3, fig. 3 is a third flowchart illustrating a ship name identification method according to an embodiment of the present application.
S310, the server continuously obtains images shot by the network camera after the radar detects the bow;
S320, detecting a ship name area in each obtained image;
S330, identifying the ship name in the ship name area;
S340, judging whether the radar detects the stern; if not, executing S320, and if so, executing S350;
S350, integrating the ship name recognition results of the multiple images;
S360, outputting the integrated ship name result and providing the image corresponding to the ship name in the result.
When the bow of the ship is detected by the radar, the network camera shoots an image of the ship every 2 seconds and sends the shot images to the server. After receiving an image, the server detects the ship name area in the image and, after the ship name area is determined, identifies the ship name in the ship name area, where the identification of the ship name area can be executed according to the method steps S120 to S140. The server then judges whether the radar has detected the stern; if not, it continues to detect ship name areas in the images shot by the network camera and identifies the ship names according to step S330. When the stern is detected, the ship name recognition results of the images shot between the radar detecting the bow and the radar detecting the stern are integrated, the integrated ship name result is output according to these recognition results, and the image corresponding to the ship name in the result is provided. Detecting the stern by radar also makes it possible to distinguish the images belonging to one ship from those belonging to the next ship.
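The loop from S310 to S360 can be sketched in Python as follows; the radar, camera, and server objects and their methods (bow_detected, stern_detected, capture, detect_name_area, recognize_name, integrate) are placeholders assumed for illustration, not an actual API.

import time

def recognize_passing_ship(radar, camera, server, interval_s=2.0):
    # collect frames from the moment the radar detects the bow until it
    # detects the stern, recognize the ship name in each frame, then
    # integrate the per-image results into a single ship name
    per_image_results = []                    # (ship_name, confidence, image)
    while not radar.bow_detected():
        time.sleep(0.1)                       # wait for a ship to enter the scene
    while not radar.stern_detected():
        image = camera.capture()              # one frame roughly every interval_s
        area = server.detect_name_area(image)
        if area is not None:
            name, conf = server.recognize_name(image, area)
            per_image_results.append((name, conf, image))
        time.sleep(interval_s)
    return server.integrate(per_image_results)  # e.g. max or weighted confidence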
Specifically, for each image, text recognition is performed based on the target features corresponding to the image to obtain the target ship name corresponding to the image and the confidence of that target ship name, and the target ship name with the highest confidence is determined as the ship name included in the ship name recognition result according to the confidences of the target ship names. By integrating the ship name recognition results of multiple images, the accuracy of ship name recognition can be improved.
The embodiment of the present application provides a ship name recognition apparatus, referring to fig. 4, where fig. 4 is a schematic structural diagram of the ship name recognition apparatus provided in the embodiment of the present application, and the ship name recognition apparatus includes: a determining module 410, an extracting module 420, a splicing module 430, and an identification module 440.
A determining module 410 for determining a ship name region in an image containing a ship;
an extracting module 420, configured to extract an image feature of a sub-region where a text unit is located in the ship name region, where the text unit is: a character row or a character column;
a splicing module 430, configured to splice image features of each sub-region according to an arrangement sequence of each text unit in the ship name region to obtain a target feature, when at least two text units exist in the ship name region; wherein, the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and the identification module 440 is configured to perform text identification based on the target features to obtain a ship name identification result.
In a possible embodiment, the position information includes a height and/or a width of the text unit in the ship name area;
the splicing module 430 is specifically configured to:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
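A minimal Python sketch of how the splicing module might determine the number of text units, order them, and concatenate their features is given below; the ratio thresholds in the correspondence table and the top-to-bottom ordering by the y coordinate are illustrative assumptions.

import numpy as np

# hypothetical correspondence between the width/height ratio of the ship
# name area and the number of text units it contains (values illustrative)
RATIO_TO_UNIT_COUNT = [(4.0, 1), (1.5, 2)]     # ratio >= 4.0 -> 1 row, >= 1.5 -> 2 rows, else 3

def count_text_units(area_width, area_height):
    ratio = area_width / area_height
    for threshold, count in RATIO_TO_UNIT_COUNT:
        if ratio >= threshold:
            return count
    return 3

def splice_features(unit_features, unit_boxes):
    # unit_features: list of np.ndarray features, one per text unit
    # unit_boxes: list of (x, y, w, h) boxes of the text units in the area;
    # rows are ordered top to bottom by y (columns could be ordered by x)
    order = np.argsort([box[1] for box in unit_boxes])
    return np.concatenate([unit_features[i] for i in order], axis=-1)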
In a possible embodiment, when the number of the images is greater than 1, the identifying module 440 includes:
the confidence coefficient submodule is used for carrying out text recognition on the basis of the target features corresponding to each image so as to obtain the target ship name corresponding to the image and the confidence coefficient of the target ship name; the image is shot between the detection of the bow of the ship by the radar and the detection of the stern of the ship by the radar;
and the processing submodule is used for obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
In a possible embodiment, the processing submodule is specifically configured to:
determining the target ship name with the highest confidence coefficient as the ship name contained in the ship name recognition result;
or
determining identical ship names among the target ship names corresponding to the respective images, performing, for each identical ship name, a weighted calculation on the confidence coefficients of that ship name to obtain a weighted confidence coefficient of that ship name, and determining the identical ship name with the highest weighted confidence coefficient as the ship name contained in the ship name recognition result.
In a possible embodiment, the determining module 410 is specifically configured to:
for each image containing a ship, inputting the image into a pre-trained ship name area detection model, and detecting the image through the pre-trained ship name area detection model to obtain a ship name area; the pre-trained ship name area detection model is obtained by training based on a preset training set, the preset training set comprises sample images containing ships, and a ship name area label is added to the area containing the ship name in each sample image, wherein the area comprises at least one text unit.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
An embodiment of the present application further provides an electronic device, referring to fig. 5, where fig. 5 is a schematic structural diagram of the electronic device provided in the embodiment of the present application, and the electronic device includes: a processor 510, a communication interface 520, a memory 530, and a communication bus 540, wherein the processor 510, the communication interface 520, and the memory 530 communicate with each other via the communication bus 540.
The memory 530 for storing a computer program;
the processor 510 is configured to implement the following steps when executing the computer program stored in the memory 530:
determining a ship name area in an image containing a ship;
extracting the image characteristics of the sub-area where the text unit is located in the ship name area, wherein the text unit is as follows: a character row or a character column;
under the condition that at least two text units exist in the ship name area, splicing the image features of each sub-area according to the arrangement sequence of each text unit in the ship name area to obtain a target feature; the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and performing text recognition based on the target characteristics to obtain a ship name recognition result.
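Putting these four steps together, a minimal end-to-end sketch might look like the following Python function; detect_name_area, extract_unit_features, and recognize_text are placeholders standing in for the region detection model, the feature extractor, and the text recognizer, and are assumptions for illustration only.

import numpy as np

def recognize_ship_name(image, detect_name_area, extract_unit_features, recognize_text):
    # step 1: determine the ship name area in the image containing a ship
    area = detect_name_area(image)
    # step 2: extract the image features of the sub-area of each text unit
    unit_features, unit_boxes = extract_unit_features(image, area)
    # step 3: if there are at least two text units, splice their features in the
    # arrangement order determined from their positions (here top to bottom)
    if len(unit_features) >= 2:
        order = np.argsort([box[1] for box in unit_boxes])
        target_feature = np.concatenate([unit_features[i] for i in order], axis=-1)
    else:
        target_feature = unit_features[0]
    # step 4: perform text recognition based on the target feature
    return recognize_text(target_feature)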
Optionally, the processor 510, when being configured to execute the program stored in the memory 530, may further implement any of the above-described ship name recognition methods.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one magnetic disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In an embodiment of the present application, a computer-readable storage medium is further provided, in which a computer program is stored; the computer program, when executed by a processor, implements the steps of any of the above-described ship name recognition methods.
In an embodiment of the present application, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the ship name recognition methods in the above-described embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. The processes or functions described above in accordance with the embodiments of the present application occur wholly or in part upon loading and execution of the above-described computer program instructions on a computer. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that includes one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It should be noted that, in this document, the technical features of the various alternatives may be combined with one another, as long as they are not contradictory, to form further solutions, all of which fall within the scope of the disclosure of the present application. Relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments of the apparatus, the electronic device, the computer-readable storage medium, and the computer program product comprising instructions, which are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
The above description is only a preferred embodiment of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application are included in the protection scope of the present application.

Claims (10)

1. A method for identifying a ship name, the method comprising:
determining a ship name area in an image containing a ship;
extracting image characteristics of a sub-region where a text unit is located in the ship name region, wherein the text unit is as follows: a character row or a character column;
under the condition that at least two text units exist in the ship name area, splicing the image features of each sub-area according to the arrangement sequence of each text unit in the ship name area to obtain a target feature; the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and performing text recognition based on the target characteristics to obtain a ship name recognition result.
2. The method of claim 1, wherein the location information comprises a height and/or a width of the text unit in the ship name area; the step of determining the arrangement sequence of the text units in the ship name area comprises the following steps:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
3. The method of claim 1, wherein when the number of the images is greater than 1, the performing text recognition based on the target feature to obtain a ship name recognition result comprises:
for each image, performing text recognition based on the target features corresponding to the image to obtain a target ship name corresponding to the image and a confidence coefficient of the target ship name; the image is shot between the time when the radar detects the bow of the ship and the time when the radar detects the stern of the ship;
and obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
4. The method of claim 3, wherein obtaining the ship name recognition result according to the confidence of each target ship name comprises:
determining the target ship name with the highest confidence coefficient as the ship name contained in the ship name recognition result;
or
determining identical ship names among the target ship names corresponding to the respective images, performing, for each identical ship name, a weighted calculation on the confidence coefficients of that ship name to obtain a weighted confidence coefficient of that ship name, and determining the identical ship name with the highest weighted confidence coefficient as the ship name contained in the ship name recognition result.
5. The method of claim 1, wherein determining a ship name region in the image containing the ship comprises:
inputting each image containing the ship into a pre-trained ship name area detection model, and detecting the image through the pre-trained ship name area detection model to obtain a ship name area; the pre-trained ship name area detection model is obtained by training based on a preset training set, the preset training set comprises sample images containing ships, and a ship name area label is added to the area containing the ship name in each sample image, wherein the area comprises at least one text unit.
6. A ship name recognition apparatus, characterized in that the apparatus comprises:
a determining module for determining a ship name area in an image containing a ship;
the extraction module is used for extracting the image characteristics of the sub-region where the text unit is located in the ship name region, wherein the text unit is as follows: a character row or a character column;
the splicing module is used for splicing the image characteristics of each sub-region according to the arrangement sequence of each text unit in the ship name region to obtain target characteristics under the condition that at least two text units exist in the ship name region; the arrangement sequence of each text unit in the ship name area is determined according to the position information of each text unit;
and the recognition module is used for performing text recognition based on the target characteristics to obtain a ship name recognition result.
7. The apparatus of claim 6, wherein the location information comprises a height and/or a width of the text unit in the ship name area;
the splicing module is specifically configured to:
acquiring the ratio of the width of the ship name area to the height of the ship name area;
determining the number of text units in the ship name area according to the ratio of the width of the ship name area to the height of the ship name area and a preset correspondence between this ratio and the number of text units in the ship name area;
determining the number of lines or columns of each text unit according to the number of the text units in the ship name area and the position of each text unit in the ship name area;
and determining the arrangement sequence of each text unit in the ship name area according to the number of the lines or the columns where each text unit is located.
8. The apparatus of claim 6, wherein when the number of images is greater than 1, the identifying module comprises:
the confidence coefficient submodule is used for carrying out text recognition on the basis of the target features corresponding to each image so as to obtain the target ship name corresponding to the image and the confidence coefficient of the target ship name; the image is shot between the time when the radar detects the bow of the ship and the time when the radar detects the stern of the ship;
and the processing submodule is used for obtaining a ship name recognition result according to the confidence coefficient of each target ship name.
9. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1-5 when executing a program stored on a memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-5.
CN202010961767.6A 2020-09-14 2020-09-14 Ship name recognition method and device, electronic equipment and computer readable storage medium Active CN112036396B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010961767.6A CN112036396B (en) 2020-09-14 2020-09-14 Ship name recognition method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010961767.6A CN112036396B (en) 2020-09-14 2020-09-14 Ship name recognition method and device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112036396A true CN112036396A (en) 2020-12-04
CN112036396B CN112036396B (en) 2022-09-02

Family

ID=73589164

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010961767.6A Active CN112036396B (en) 2020-09-14 2020-09-14 Ship name recognition method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN112036396B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108446621A (en) * 2018-03-14 2018-08-24 平安科技(深圳)有限公司 Bank slip recognition method, server and computer readable storage medium
CN109117848A (en) * 2018-09-07 2019-01-01 泰康保险集团股份有限公司 A kind of line of text character identifying method, device, medium and electronic equipment
CN110766002A (en) * 2019-10-08 2020-02-07 浙江大学 Ship name character region detection method based on deep learning
CN110796130A (en) * 2019-09-19 2020-02-14 北京迈格威科技有限公司 Method, device and computer storage medium for character recognition
CN111582182A (en) * 2020-05-11 2020-08-25 广州创亿源智能科技有限公司 Ship name identification method, system, computer equipment and storage medium

Also Published As

Publication number Publication date
CN112036396B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
US11182592B2 (en) Target object recognition method and apparatus, storage medium, and electronic device
CN107239786B (en) Character recognition method and device
US8374454B2 (en) Detection of objects using range information
CN109670383B (en) Video shielding area selection method and device, electronic equipment and system
CN109389110B (en) Region determination method and device
CN111104813A (en) Two-dimensional code image key point detection method and device, electronic equipment and storage medium
CN111563439B (en) Aquatic organism disease detection method, device and equipment
CN112733666A (en) Method, equipment and storage medium for collecting difficult images and training models
CN111368698A (en) Subject recognition method, subject recognition device, electronic device, and medium
CN112036396B (en) Ship name recognition method and device, electronic equipment and computer readable storage medium
CN112560856B (en) License plate detection and identification method, device, equipment and storage medium
CN111860122B (en) Method and system for identifying reading comprehensive behaviors in real scene
CN114708582B (en) AI and RPA-based electric power data intelligent inspection method and device
CN113221718B (en) Formula identification method, device, storage medium and electronic equipment
CN111062377B (en) Question number detection method, system, storage medium and electronic equipment
CN114743048A (en) Method and device for detecting abnormal straw picture
CN110969602B (en) Image definition detection method and device
CN114445841A (en) Tax return form recognition method and device
CN111611986A (en) Focus text extraction and identification method and system based on finger interaction
CN113139629A (en) Font identification method and device, electronic equipment and storage medium
US20220392241A1 (en) Font detection method and system using artificial intelligence-trained neural network
CN114511867A (en) OCR (optical character recognition) method, device, equipment and medium for bank card
US20240037889A1 (en) Image processing device, image processing method, and program recording medium
US11710331B2 (en) Systems and methods for separating ligature characters in digitized document images
US20220237931A1 (en) Systems and methods for printed code inspection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant