Hello all,
One of my experiments has part of its inputs running on an NI 9215. I use the latest NI-DAQmx from my CVI code. The people running the acquisitions have noticed that each channel has its own offset, typically 0.08 V, 0.11 V, 0.08 V and 0.13 V for the four channels, which seems like a lot. As a test, I created and ran a new task from the NI MAX interface, and the offsets there are much lower, approximately 200 uV, -200 uV, 400 uV and -100 uV.
First question: why are the offsets different between my CVI program and the test task? In my code I use DAQmxCreateAIVoltageChan(Task, AIchan, ChanName, DAQmx_Val_Cfg_Default, -10, 10, DAQmx_Val_Volts, NULL) and little else to declare the tasks.
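For reference, here is essentially all the DAQmx setup my program does, in simplified form; the task name, channel string and timing values are placeholders for the real ones:

#include <NIDAQmx.h>

static TaskHandle Task = 0;

/* Simplified version of my task setup; "cDAQ1Mod1/ai0:3" and the
   sample rate are placeholders for the real values. */
int SetupAcquisition(void)
{
    int32 error = DAQmxCreateTask("MyAITask", &Task);
    if (error < 0) return error;

    /* Same call as quoted above: default terminal config, +/-10 V range. */
    error = DAQmxCreateAIVoltageChan(Task, "cDAQ1Mod1/ai0:3", "",
                                     DAQmx_Val_Cfg_Default, -10.0, 10.0,
                                     DAQmx_Val_Volts, NULL);
    if (error < 0) return error;

    /* Continuous sampling at 1 kS/s per channel. */
    error = DAQmxCfgSampClkTiming(Task, NULL, 1000.0, DAQmx_Val_Rising,
                                  DAQmx_Val_ContSamps, 1000);
    if (error < 0) return error;

    return DAQmxStartTask(Task);
}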
Now, I understand the need for regular calibration, and I saw the calibration method in the task in NI MAX, but I see no way to later use the calibration saved there from my program (is that possible?). All the NI-DAQmx calibration functions (DAQmxSetChanAttribute, ...) seem to replicate the whole manual calibration process, but in C. What is the point? It would make the software a lot more complicated, and its use too, since a trusted voltage source would have to be connected while the program is running.
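For what it's worth, the only simple-looking entry point I found in the DAQmx C API is DAQmxSelfCal(). This is a sketch of what I imagine, assuming the 9215 even supports self-calibration (which I am not sure of; the device name is a placeholder):

#include <stdio.h>
#include <NIDAQmx.h>

/* Sketch only: I do not know whether the 9215 accepts this.
   "cDAQ1Mod1" stands for the module name as it appears in NI MAX. */
int SelfCalibrateModule(void)
{
    int32 error = DAQmxSelfCal("cDAQ1Mod1");
    if (error < 0)
    {
        char msg[2048];
        DAQmxGetExtendedErrorInfo(msg, sizeof(msg));
        printf("Self-calibration failed: %s\n", msg);
    }
    return error;
}

Is something like that the intended route, or does the external calibration saved from NI MAX apply automatically to tasks created in my program?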
Third question: why is Differential the only terminal configuration available? Why can't I use single-ended or pseudodifferential?
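To make the question concrete, this is the kind of variant I would have expected to be able to write (purely hypothetical, since only Differential is offered for this module):

/* Hypothetical: same channel creation but requesting referenced
   single-ended inputs; on my setup only DAQmx_Val_Diff is available. */
DAQmxCreateAIVoltageChan(Task, "cDAQ1Mod1/ai0:3", "", DAQmx_Val_RSE,
                         -10.0, 10.0, DAQmx_Val_Volts, NULL);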
Final question: while looking at calibration options, I found the function NISysCfgSelfCalibrateHardware(), but it belongs to nisyscfg.h. What is the relationship between NI-DAQmx and nisyscfg? Can I use it for this, or is it something different?
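In case it helps, this is my untested reading of how nisyscfg would be driven; everything below is an assumption on my part (in particular the enumeration logic, and picking the first resource instead of filtering for the 9215):

#include <stdio.h>
#include <nisyscfg.h>

/* Untested sketch from reading nisyscfg.h; "localhost" targets the
   local system. Real code would filter for the 9215 module instead
   of taking the first enumerated resource. */
int SelfCalViaSysCfg(void)
{
    NISysCfgSessionHandle session = NULL;
    NISysCfgEnumResourceHandle resources = NULL;
    NISysCfgResourceHandle resource = NULL;
    char *detail = NULL;

    NISysCfgStatus status = NISysCfgInitializeSession(
        "localhost", NULL, NULL, NISysCfgLocaleDefault,
        NISysCfgBoolFalse, 10000, NULL, &session);
    if (status != NISysCfg_OK) return -1;

    /* Enumerate all hardware (no filter) and self-calibrate the
       first resource found. */
    status = NISysCfgFindHardware(session, NISysCfgFilterModeMatchValuesAll,
                                  NULL, NULL, &resources);
    if (status == NISysCfg_OK &&
        NISysCfgNextResource(session, resources, &resource) == NISysCfg_OK)
    {
        status = NISysCfgSelfCalibrateHardware(resource, &detail);
        if (detail)
        {
            printf("Result: %s\n", detail);
            NISysCfgFreeDetailedString(detail);
        }
        NISysCfgCloseHandle(resource);
    }

    if (resources) NISysCfgCloseHandle(resources);
    NISysCfgCloseHandle(session);
    return (status == NISysCfg_OK) ? 0 : -1;
}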
Thanks.