
A Real-time vehicle License Plate Recognition (LPR) System
by Bar-Hen Ron
Supervised by Johanan Erez
A typical schematic of the system:

Abstract
The purpose of this project was to build a real-time application which recognizes license plates from cars at a gate, for example at the entrance of a parking area. The system, based on a regular PC with a video camera, captures video frames which include a visible car license plate and processes them. Once a license plate is detected, its digits are recognized, displayed on the user interface, or checked against a database. The focus is on the design of the algorithms used for extracting the license plate from a single image, isolating the characters of the plate, and identifying the individual characters.
The background:
There have been similar past projects at the Lab., including projects which implemented the whole system. The purpose of this project was first and foremost to improve the accuracy of the program and, whenever possible, its time complexity. According to the tests we ran on the set of 45 images used in our program, all the past projects at the Lab. had poor accuracy and were successful only when particular conditions were satisfied. For this reason, except for a few rare cases, the entire program was written again.
Brief description of the implementation:
Our license plate recognition system can be roughly broken down into the following block diagram.
Block diagram of the global system.
Alternatively, this progression can be viewed as the reduction or suppression of unwanted information from the information-carrying signal: a video sequence containing vast amounts of irrelevant information is reduced to abstract symbols in the form of the characters of a license plate.
The Optical Character Recognition (OCR) was performed using the neural network technique: a feed-forward network with 3 layers, with 200 neurons in the input layer, 20 neurons in the middle layer, and 10 neurons in the output layer. We kept the neural network dataset used in a previous project, which includes 238 digit images.
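As a rough illustration (not the original code), such a network could be built in Matlab with the Neural Network Toolbox. Here we assume the 200 input neurons correspond to digit images resized to 20x10 pixels, and patternnet, which postdates the 2002 implementation, stands in for the original feed-forward network; all variable names are illustrative.

    % Hypothetical training data: one 200-pixel column per digit image,
    % one-hot targets over the 10 digit classes (labels 0-9 shifted to 1-10).
    inputs  = reshape(digitImages, 200, []);      % 200 x numSamples
    targets = full(ind2vec(digitLabels + 1));     % 10 x numSamples

    net = patternnet(20);                         % one hidden layer of 20 neurons
    net = train(net, inputs, targets);            % input/output sizes set by the data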
The detailed steps of our algorithm are described in the following diagram:

Block diagram of the program subsystems.
Shown below are the outputs of the main steps described above for a given captured frame:
Example of a captured frame
Captured frame with yellow regions filtered
Captured frame with yellow regions dilated
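As an illustration of the two steps above, a yellow-pixel mask can be obtained by thresholding the color channels and then dilated so that nearby yellow pixels merge into candidate plate regions. The thresholds and structuring-element size below are assumptions, not the statistically fixed parameters of the program.

    rgb    = im2double(frame);                    % hypothetical captured frame
    yellow = rgb(:,:,1) > 0.4 & rgb(:,:,2) > 0.4 & rgb(:,:,3) < 0.3;
    yellowDilated = imdilate(yellow, strel('rectangle', [5 15]));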
License plate region
Determining the angle of the plate using the Radon transform
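The skew angle can be estimated with Matlab's radon function: project an edge map of the plate region over a range of angles and keep the angle whose projection contains the sharpest peak, produced by the plate's long horizontal edges. This is only a sketch of the idea; the angle range and sign convention are assumptions.

    edges  = edge(rgb2gray(plateRegion), 'sobel');
    theta  = -30:0.5:30;                          % assumed skew range in degrees
    R      = radon(double(edges), 90 + theta);    % one projection per candidate angle
    [~, k] = max(max(R, [], 1));                  % angle with the sharpest peak
    improved = imrotate(plateRegion, theta(k), 'bilinear', 'crop');  % sign may need flipping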
Improved LP region
Adjusting the LP Contours - Columns Sum Graph
Adjusting the LP Contours - Lines Sum Graph
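The contour adjustment relies on these projection profiles: the binary plate image is summed along its columns and along its lines, and the crop boundaries are placed where the profile crosses a level separating the plate from the background. The half-maximum cutoff below is an assumed heuristic.

    colSum = sum(bwPlate, 1);                     % columns sum graph
    rowSum = sum(bwPlate, 2);                     % lines sum graph
    left   = find(colSum > 0.5 * max(colSum), 1, 'first');
    right  = find(colSum > 0.5 * max(colSum), 1, 'last');
    top    = find(rowSum > 0.5 * max(rowSum), 1, 'first');
    bottom = find(rowSum > 0.5 * max(rowSum), 1, 'last');
    lpCrop = bwPlate(top:bottom, left:right);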
LP Crop
Gray scale LP
LP binarization and equalization using an adaptive threshold
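In current Matlab this step could be written with adapthisteq and adaptthresh; the original implementation predates these functions, so the following is only an equivalent sketch.

    grayLP = adapthisteq(rgb2gray(lpColor));      % local contrast equalization
    T      = adaptthresh(grayLP, 0.5);            % locally adaptive threshold map
    bwLP   = ~imbinarize(grayLP, T);              % dark characters become foreground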
Binary LP
Normalized LP
Determining the LP horizontal contours using the sum of the lines of the previous image
Normalized LP with contours adjusted
Character Segmentation using the peaks-to-valleys method
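In the peaks-to-valleys method, the column-sum profile of the binary plate shows peaks where character strokes cross the columns and valleys in the gaps between characters; the characters are cut at the valleys. A minimal sketch, with an assumed valley threshold:

    profile = sum(bwLP, 1);                       % peaks = characters, valleys = gaps
    isGap   = profile < 0.1 * max(profile);       % assumed valley threshold
    d       = diff([1, isGap, 1]);                % gap/character transitions
    starts  = find(d == -1);                      % gap -> character
    stops   = find(d ==  1) - 1;                  % character -> gap
    chars   = arrayfun(@(s, e) bwLP(:, s:e), starts, stops, 'UniformOutput', false);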
Dilated digit image
Adjusting the digit images' horizontal contours - Line sum graph
Contour-adjusted digit image
Resized digit image
OCR digit recognition using the neural network method
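Each segmented character is finally resized to the fixed input size of the network, assumed here to be 20x10 pixels to match the 200 input neurons, and fed through it; the strongest output neuron gives the digit.

    digitImg  = imresize(chars{k}, [20 10]);      % assumed 20x10 = 200 inputs
    scores    = net(double(digitImg(:)));         % 10 output activations
    [~, best] = max(scores);
    digit     = best - 1;                         % map output neurons to 0-9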
Tools
The implementation of the program was developed in Matlab. A demo program was also written, as shown in Figure 29, in which the user can see all the steps of the different algorithms, set the level of detail to be displayed, and control the speed of the demo. The demo can be started, stopped, or paused. In its current version the demo includes 45 images on which the algorithm was successful. It is important to notice that the speed of the simulation does not reflect the real speed of the whole algorithm, since “pause” commands have been inserted into the program, and loading the images itself takes time.
The Demo Graphical User Interface:

A very useful freeware tool for testing most of Matlab's built-in image processing functions was downloaded from the Mathworks site; a link to its source appears at the end of this page.
The first picture on this page was taken from the Hi-Tech Solutions™ site with their authorization.
Conclusions and future work
The first conclusion is that what is trivial for the human eye may be a very difficult task for a computer; still, computer vision can be very powerful and makes it possible to perform very useful operations such as the one we implemented in this project.
The algorithms used in the program have been tested and proved to be accurate and efficient, but there are still cases in which they fail. The most important problems we noticed are the following:
-   The most important problem is the size of the neural network dataset: if it is enlarged in future implementations, the accuracy of the algorithm will improve considerably.
-   The candidate selection algorithm applied to the yellow-regions-filtered image sometimes fails; the main improvement would be to refine the statistically fixed parameters used in this algorithm.
-   In general, all the statistically fixed parameters should be refined by performing more tests.
-   The yellow region extraction algorithm sometimes fails, and it would be a good idea in future implementations to combine it with a supplementary algorithm based on the fact that the lines on which the number plate is located in the image have a clear "signature": strong grey-level variations at somewhat "regular" intervals, which usually makes it possible to distinguish them from other lines in the image, or at least to pre-select some positions where to look further (see the sketch after this list).
-   Generally, the decision algorithms should be improved, and a way to detect errors and make the decision flow circular should be developed. For example, if there are multiple candidates for the LP location that satisfy the criteria, each one of them should be tested against predefined supplementary criteria; and in cases of doubt when identifying the digits, that is, when the probability of the best guess being correct is below some threshold, the system should refuse to make a decision.
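As a rough illustration of the "signature" idea mentioned above, rows crossing a license plate exhibit many strong grey-level transitions at somewhat regular intervals; counting large horizontal gradients per line is one simple way to pre-select candidate rows. The gradient threshold and minimum count below are assumptions.

    gray = double(rgb2gray(frame));
    dx   = abs(diff(gray, 1, 2));                 % horizontal grey-level variations
    transitions   = sum(dx > 40, 2);              % assumed gradient threshold
    candidateRows = find(transitions > 20);       % assumed minimum transition count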
Acknowledgment
We are grateful to our project supervisor, the Lab.'s chief engineer Johanan Erez, for his help and guidance throughout this work. We are also grateful to the Ollendorf Minerva Center Fund for supporting this project.
Related documentation
Images
Full documentation