Thank you for using LTF-Cimulator, a simulator of the LTF-C (Local Transfer Function Classifier) neural network, which can be used to solve classification problems.
I designed LTF-C in 2000. Since then I have improved it a bit and tested it in various real-world applications, where it performed very well. I hope LTF-C will meet your expectations, too.
You can find more information on LTF-C on my web page.
LTF-Cimulator is fully functional and freely available for noncommercial purposes (licence). With it you can:
I have tried to make this program as easy to use as possible. Example datasets are included, which should help you become familiar with the program.
In order to conduct a typical experiment with LTF-Cimulator you should:
Immediately after training, the network is automatically saved to the temporary.net file and tested on the training and test sets (as if the Test button had been pressed).
This version of LTF-Cimulator does not allow the training to be stopped. The program does not respond to user commands until the current task (training or testing) is finished.
In order to test a network saved in a .net file you should:
Test results are displayed in the Accuracy frame, separately for the training and test sets. The percentage of patterns classified correctly/incorrectly (or correctly/unrecognized/incorrectly), together with the actual number of patterns, is given. In the Unrecognize threshold field you can change the threshold delta, below which a pattern is classified as unrecognized.
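The decision rule behind the unrecognize threshold can be sketched as follows. This is a minimal illustration, not the program's actual code: it assumes the winning class is the one with the strongest output-neuron response, and that the pattern counts as unrecognized when even that response falls below delta.

```python
def classify(responses, delta=0.5):
    """Classify a pattern from its output-neuron responses.

    Returns the index of the winning class, or None ("unrecognized")
    when the strongest response falls below the threshold delta.
    (Hypothetical sketch; the exact rule used by LTF-Cimulator
    may differ.)
    """
    best = max(range(len(responses)), key=lambda i: responses[i])
    if responses[best] < delta:
        return None  # unrecognized
    return best

print(classify([0.1, 0.8, 0.3], delta=0.5))  # 1 (class 1 wins)
print(classify([0.1, 0.2, 0.3], delta=0.5))  # None (unrecognized)
```

Raising delta trades more "unrecognized" answers for fewer outright misclassifications.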
During the test, the responses of the output neurons and of the whole network are automatically saved after every presentation to the trainset.rsp and testset.rsp files, which allows further processing, e.g. in a spreadsheet.
In order to perform training with cross-validation you should:
During cross-validation only the first dataset from the .dat file is used. In every cross-validation step this set is split into a training part and a test part, according to the remainder of the pattern index divided by the number of steps.
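The remainder-based split can be sketched as follows. This assumes the natural reading of the rule: in step s, pattern i goes to the test part when i mod n_steps equals s, and to the training part otherwise.

```python
def cv_split(n_patterns, n_steps, step):
    """Split pattern indices for one cross-validation step.

    A pattern with index i belongs to the test part of step `step`
    when i % n_steps == step; all other patterns form the training
    part. (Assumed interpretation of the remainder rule; the
    simulator's internal indexing may differ.)
    """
    test = [i for i in range(n_patterns) if i % n_steps == step]
    train = [i for i in range(n_patterns) if i % n_steps != step]
    return train, test

train, test = cv_split(10, 3, 0)
print(test)   # [0, 3, 6, 9]
print(train)  # [1, 2, 4, 5, 7, 8]
```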
After cross-validation a message box appears, showing the mean number of hidden neurons in the trained networks. The Accuracy frame displays the results averaged over all the networks.
Before starting the training, you should set the values of the training parameters (in the Training parameters frame), which influence the training process. You can do this in two ways:
If you choose the second way, remember that the values should satisfy some general relations:
Choosing the parameter values requires considerable experience. That is why I recommend relying on the default values, at least at the beginning.
The parameter most often worth modifying is eta_u. The number of hidden neurons, the fit to the training data and the generalization ability of the network all depend indirectly on this parameter. Generally speaking, a greater eta_u results in a smaller network, with a worse fit to the data but better generalization ability.
In very rare cases you may need to modify eta_gp as well. Namely, if the training process goes completely wrong, i.e. it yields a very small network (with one or two hidden neurons), you should try decreasing eta_gp.
For more precise information on the meaning of the parameters and a method for choosing their values, visit my web page, where you can find papers on LTF-C.
Data for training and testing must be saved in a .dat file. This is a text file comprising elements preceded by appropriate labels (which start with the "@" character). The required elements are:
Every dataset lies between the labels "<DatasetX" (X is the number of the dataset; numbering begins with 0) and ">". Between these labels the "NumSamples" label should also appear, followed by the number of patterns in the dataset. The "Data" label should directly precede the data. Every pattern comprising the data is a series of space-delimited real numbers (the values of consecutive attributes) followed by an integer: the class number of the pattern. Class numbering begins with 0.
Attention: even if the classification of the patterns is unknown, some class numbers must still be given, e.g. 0.
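A small dataset with three patterns, two attributes and two classes might look roughly like this. The exact label spelling and placement of the "@" prefix are inferred from the description above; consult the example datasets bundled with the program for the authoritative form:

```
@<Dataset0
@NumSamples 3
@Data
0.5  1.2  0
0.7  0.9  1
0.1  1.5  0
@>
```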
To obtain the best results, remember that the dispersion of the attribute values should be neither very large nor very small. Most often it is enough to divide every attribute by its standard deviation or by the difference between its maximum and minimum. When the dispersion of attribute values carries significant information, i.e. when attributes with larger dispersion are more important, the normalization should be applied to all attributes together, not to each one separately.
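The per-attribute scaling described above can be sketched like this. The function names and the choice of population (rather than sample) standard deviation are this sketch's own; any equivalent preprocessing works.

```python
def normalize(column, method="std"):
    """Scale one attribute column so its dispersion is moderate.

    method="std"   -- divide by the standard deviation
    method="range" -- divide by (max - min)

    (Per-attribute scaling; when dispersion itself carries
    information, scale all attributes by one common factor
    instead, as noted above.)
    """
    if method == "std":
        mean = sum(column) / len(column)
        var = sum((x - mean) ** 2 for x in column) / len(column)
        scale = var ** 0.5  # population standard deviation
    else:
        scale = max(column) - min(column)
    return [x / scale for x in column]

print(normalize([0.0, 10.0], method="range"))  # [0.0, 1.0]
print(normalize([0.0, 10.0], method="std"))    # [0.0, 2.0]
```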
A trained network can be saved to a .net file with Save As... in the File menu. This file has a format very similar to that of .dat files. Apart from the network itself, it also contains the training parameters used to train the network.
It should be mentioned that the .net file does not actually contain the neuron radii, but their reciprocals. Moreover, the file contains some undocumented elements, which you can safely ignore. To use a network in a different program, you need only the information preceded by the labels Inputs, Outputs and Neurons in the file header, and Class, Weights and RecipOfRadii in the description of each neuron.
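Evaluating a saved network in another program could look roughly like this. This is a sketch under two assumptions not stated above: that each hidden neuron applies a Gaussian local transfer function centred on its weights, and that the network output for a class is the sum of the responses of that class's neurons. Only the fields named above (Class, Weights, RecipOfRadii) are used, with the reciprocals applied exactly as stored.

```python
import math

def neuron_response(x, weights, recip_radii):
    """Response of one hidden neuron with an assumed Gaussian
    local transfer function:
        f(x) = exp(-sum(((x_i - w_i) * r_i) ** 2))
    where r_i are the reciprocals of the radii, exactly as
    stored in the .net file (no inversion needed).
    """
    s = sum(((xi - wi) * ri) ** 2
            for xi, wi, ri in zip(x, weights, recip_radii))
    return math.exp(-s)

def network_response(x, neurons, n_classes):
    """Aggregate hidden-neuron responses per class. Each neuron
    is a dict holding the Class, Weights and RecipOfRadii fields
    read from the .net file. (Summation as the aggregation rule
    is an assumption of this sketch.)
    """
    out = [0.0] * n_classes
    for n in neurons:
        out[n["Class"]] += neuron_response(
            x, n["Weights"], n["RecipOfRadii"])
    return out
```

A pattern lying exactly on a neuron's centre yields a response of 1; the response decays towards 0 with distance, faster along dimensions with larger stored reciprocals (smaller radii).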
Marcin Wojnarski
POLAND
mailto: mwojnars "at" ns.onet.pl
http://www.mimuw.edu.pl/~mwojnars