Hi,
I am trying to understand the performance difference between code running on a desktop machine and on an RMC-8354.
The routine is a spike filter for an array of data; basically it does *result++ = *source++ (plus some extra spike-rejection logic) over 1000 points.
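For reference, the shape of the loop is roughly this (a minimal sketch; spike_filter, threshold, and the exact rejection rule are placeholders, not my actual code):

/* Minimal sketch of the routine's shape. The identifiers and the
 * exact spike-rejection rule are placeholders for the real logic. */
void spike_filter(double *result, const double *source, int n,
                  double threshold)
{
    double prev = source[0];
    for (int i = 0; i < n; i++) {
        double v = *source++;
        if (v - prev > threshold || prev - v > threshold)
            v = prev;                /* hold last good value on a spike */
        *result++ = v;
        prev = v;
    }
}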
This code runs on the desktop machine in 10 microseconds. The same code runs on the RMC-8354 in 10 MILLISECONDS!
I have tried allocating the memory on the stack and on the heap, with the same results. It looks like an interrupt is generated for every memory access.
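Here is roughly how I am timing it (a sketch using the standard C clock(); filter_pass is just a stand-in for the real routine, and N/REPS are arbitrary):

#include <stdio.h>
#include <time.h>

#define N    1000   /* points per pass, as in the real routine */
#define REPS 1000   /* repeat to get a measurable interval */

/* Stand-in for the real filter pass: just the copy loop. */
static void filter_pass(double *result, const double *source, int n)
{
    for (int i = 0; i < n; i++)
        *result++ = *source++;
}

int main(void)
{
    static double src[N], dst[N];   /* also tried malloc'd buffers */
    clock_t t0 = clock();
    for (int k = 0; k < REPS; k++)
        filter_pass(dst, src, N);
    clock_t t1 = clock();
    printf("avg per pass: %.1f us\n",
           1e6 * (double)(t1 - t0) / CLOCKS_PER_SEC / REPS);
    /* Reading dst keeps the loop from being optimized away. */
    printf("dst[0] = %f\n", dst[0]);
    return 0;
}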
Clearly I'm doing something very silly. Any help would be appreciated.
Thanks,
desiko