Long processing time

The simulation model aims to find the BER for various SNR values and plot them on a chart. The processes involved in finding the BER for a single SNR value are: random generation of a large number of symbols, modulation of these symbols using one of the modulation schemes such as PSK, encoding with a MIMO-STC scheme, generating and applying a Rayleigh flat-fading profile, generating and adding AWGN, decoding, demodulation and BER calculation. Each of these processes manipulates the large number of symbols generated, so the processing time is directly related to the number of symbols. All the simulation results discussed in this report have been generated using 10,000 symbols. To produce results that are accurate and faithful to a real-world environment, the fading and AWGN channels are programmed to change randomly; thus each time the model runs, a different BER value is obtained. Hence, to obtain an average BER value, the model repeats the procedure a number of times and then averages all the BER values. The processing time therefore increases in direct proportion to the value the user sets in the ‘# of iterations per SNR’ control.
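The overall flow can be summarized by the following minimal Monte-Carlo sketch. It is written in Python purely for illustration (the actual model is a LabVIEW VI), it uses a single-antenna QPSK link for brevity rather than the full MIMO-STC chain, and all function and variable names are assumptions:

    import numpy as np

    def simulate_ber(snr_db, n_symbols=10_000, n_iterations=20):
        """Average BER over several random channel/noise realizations (illustrative only)."""
        ber_values = []
        for _ in range(n_iterations):
            # 1. Random QPSK symbols (2 bits per symbol, unit energy)
            bits = np.random.randint(0, 2, 2 * n_symbols)
            symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

            # 2. Rayleigh flat-fading profile, redrawn on every iteration
            h = (np.random.randn(n_symbols) + 1j * np.random.randn(n_symbols)) / np.sqrt(2)

            # 3. AWGN scaled for the requested SNR
            noise_std = np.sqrt(10 ** (-snr_db / 10) / 2)
            noise = noise_std * (np.random.randn(n_symbols) + 1j * np.random.randn(n_symbols))
            received = h * symbols + noise

            # 4. Equalization and hard-decision demodulation
            equalized = received / h
            rx_bits = np.empty_like(bits)
            rx_bits[0::2] = (equalized.real < 0).astype(int)
            rx_bits[1::2] = (equalized.imag < 0).astype(int)

            # 5. BER for this iteration
            ber_values.append(np.mean(bits != rx_bits))
        return np.mean(ber_values)

    # BER curve: one averaged BER value per SNR point
    ber_curve = [simulate_ber(snr) for snr in range(0, 21, 2)]

Every step in the loop touches all 10,000 symbols, which is why the run time scales with both the symbol count and the number of iterations per SNR point.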

In addition to the above, the processing time also increases as the MIMO-STC coding scheme becomes more complex. For example, the Spatial Multiplexing scheme requires the most processing time for BER calculations, because the receiver must evaluate the maximum-likelihood metric for every possible combination of transmitted symbols and pick the one that minimizes it. The modulation scheme therefore determines part of the processing time, since the constellation size fixes the number of candidate symbols per antenna. Moreover, the number of combinations multiplies each time a transmit antenna is added (adding receive antennas also increases the per-candidate computation), so the processing time grows exponentially with the size of the antenna array.
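To make the growth explicit, the sketch below (again Python, purely illustrative; the helper name and the noiseless example are assumptions) performs the exhaustive maximum-likelihood search used by a spatial-multiplexing receiver. With constellation size M and Nt transmit antennas there are M^Nt candidate vectors, e.g. 16 for QPSK over 2x2 but 256 for QPSK over 4x4:

    import itertools
    import numpy as np

    def ml_detect(y, H, constellation):
        """Exhaustive ML detection: test every candidate transmit vector."""
        n_tx = H.shape[1]
        best_vec, best_metric = None, np.inf
        # M^Nt candidate vectors must be tested (M = constellation size)
        for candidate in itertools.product(constellation, repeat=n_tx):
            x = np.array(candidate)
            metric = np.linalg.norm(y - H @ x) ** 2   # ML metric under AWGN
            if metric < best_metric:
                best_metric, best_vec = metric, x
        return best_vec

    # Example: QPSK over a 2x2 channel (noiseless, for brevity)
    qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
    H = (np.random.randn(2, 2) + 1j * np.random.randn(2, 2)) / np.sqrt(2)
    x_true = np.random.choice(qpsk, 2)
    y = H @ x_true
    print(ml_detect(y, H, qpsk))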

Hardware Limitations

 The processor used for simulating the model is Intel’s Pentium D. The CPU comprises two dies, each containing a single core residing next to each other on a multi-chip module package. Despite having two dies, the LabVIEW software was not able to take advantage of it. The reason for this is due to the processors architecture. Pentium D has two processing cores, however a single application can never utilize the both the cores simultaneously. The other core comes to life only if one core is completely utilized by some other process. Hence by default LabVIEW doesn't create threads for two processors; it creates only for one (which has been observed and confirmed from Task Manager). So unless it is manually configured both the cores are not used. This manual configuration of the processors to work independently is shown in the next section by using the Timed Loop feature of LabVIEW. However this thing is possible only if two data independent VIs are created and assigned to each of the cores. It doesn’t solve problem for a single VI. Had it been a core to duo processor where an application is allowed to access both the cores, LabVIEW would automatically create threads for both the cores. But as mentioned in Pentium D LabVIEW is not aware of the second core by default, so it doesn’t create threads for the second core. Manual creation and management of two sets of threads for each core such that each set can be independently executed is beyond the scope of this project
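The underlying idea of running two data-independent computations side by side can be sketched outside LabVIEW with ordinary multiprocessing. The Python sketch below is only an analogy for assigning two independent VIs to separate cores via Timed Loops, not the project’s actual implementation, and the task itself is a placeholder:

    from multiprocessing import Pool
    import numpy as np

    def heavy_task(seed):
        """Stand-in for one data-independent VI: some CPU-bound work."""
        rng = np.random.default_rng(seed)
        data = rng.standard_normal((1000, 1000))
        return float(np.linalg.eigvalsh(data @ data.T).max())

    if __name__ == "__main__":
        # Two independent tasks; the OS may schedule each worker on its own core
        with Pool(processes=2) as pool:
            results = pool.map(heavy_task, [1, 2])
        print(results)

The key constraint carries over: the speed-up is only available when the workload can be split into pieces with no data dependence between them.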

Maximum BER resolution - a limit set by the number of symbols

BER is the ratio of the number of bits received in error to the total number of bits received. A typical BER vs. SNR curve spans a logarithmic BER range from 1E0 to 1E-6, so the minimum BER resolution the chart needs to display the desired results is 1E-6. Such a resolution can be obtained only if the receiver encounters as few as 1 bit error in 1 million bits; in practice the number of error bits is much larger than 1, so the achievable resolution is far coarser. Moreover, the simulation model finds it difficult and time-consuming to handle more than 10,000 symbols, and 10,000 symbols in turn can provide a resolution of only up to 2E-4 for QPSK modulation (or other quadrature schemes). Thus the hardware and software limitations that restrict the number of symbols generated ultimately limit the BER resolution of the curves. This can be observed in the curves displayed in this report: most of them stop being reliable once the BER crosses the 1E-4 limit.
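The link between symbol count and resolution can be made explicit with a small helper. This is a Python illustration only; the number of error events needed for a trustworthy estimate is an assumption here, and the exact practical floor depends on how the model accumulates bits across iterations:

    def min_resolvable_ber(n_symbols, bits_per_symbol, min_error_events=1):
        """Smallest BER reportable from a run of n_symbols symbols."""
        total_bits = n_symbols * bits_per_symbol
        return min_error_events / total_bits

    # Theoretical floor for one run of 10,000 QPSK symbols (2 bits per symbol)
    print(min_resolvable_ber(10_000, 2))
    # A usable estimate needs several error events, so the practical floor is coarser
    print(min_resolvable_ber(10_000, 2, min_error_events=4))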

Storing Complex Numbers

A way to decrease the processing time substantially is to skip the generation of symbols, and even the generation and application of the fading profile, and instead read the data from a stored file. This method is found to yield results faster without any considerable loss in accuracy. LabVIEW provides a feature for storing data to a spreadsheet file; however, it supports only the double data type, whereas the simulation model needs to store complex data.
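A common workaround, sketched below in Python purely for illustration and not as the method used in this project, is to split each complex value into its real and imaginary parts and store them as two double-precision columns, which any spreadsheet-style writer can handle; the file name and data are hypothetical:

    import numpy as np

    # Hypothetical complex data, e.g. a pre-computed fading profile
    data = (np.random.randn(1000) + 1j * np.random.randn(1000)) / np.sqrt(2)

    # Write: two double columns, real part and imaginary part
    np.savetxt("fading_profile.csv", np.column_stack([data.real, data.imag]), delimiter=",")

    # Read back and reassemble the complex array
    cols = np.loadtxt("fading_profile.csv", delimiter=",")
    restored = cols[:, 0] + 1j * cols[:, 1]
    assert np.allclose(data, restored)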

Flatten to String; pointer limitation

An alternative to storing the complex data in a spreadsheet file is to store it in the form of a string. LabVIEW has the ‘Flatten to String’ function, which converts data of any type to string data, and this string data can be stored as a text file. However, when this file is read back, the data gets corrupted. The reason for this is not yet known, but a possible explanation is a pointer limitation: LabVIEW’s documentation states that when data is read from a text file, it must not exceed the range of a 32-bit pointer, failing which an error is returned.
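The idea behind ‘Flatten to String’ can be illustrated outside LabVIEW as well. The Python sketch below is only an analogy and does not reproduce LabVIEW’s behaviour or its 32-bit pointer limit; the file name is hypothetical. The complex array is flattened to a raw byte string, written to a file and later reconstructed:

    import numpy as np

    data = (np.random.randn(1000) + 1j * np.random.randn(1000)).astype(np.complex128)

    # Flatten the complex array to a raw byte string and store it
    with open("symbols.bin", "wb") as f:
        f.write(data.tobytes())

    # Read the byte string back and rebuild the complex array
    with open("symbols.bin", "rb") as f:
        restored = np.frombuffer(f.read(), dtype=np.complex128)

    assert np.array_equal(data, restored)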